Search results for: inherent feature
143 Classification of Coughing and Breathing Activities Using a Wearable Sensor and a Light-Weight DL Model
Authors: Subham Ghosh, Arnab Nandi
Abstract:
Background: The proliferation of Wireless Body Area Networks (WBAN) and Internet of Things (IoT) applications demonstrates the potential for continuous monitoring of physical changes in the body. These technologies are vital for health monitoring tasks, such as identifying coughing and breathing activities, which are necessary for disease diagnosis and management. Monitoring activities such as coughing and deep breathing can provide valuable insights into a variety of medical issues. Wearable radio-based antenna sensors, which are lightweight and easy to incorporate into clothing or portable goods, provide continuous monitoring. This mobility gives them a substantial advantage over stationary environmental sensors such as cameras and radar, which are constrained to certain places. Furthermore, using compressive techniques provides benefits such as reduced data transmission rates and memory needs. These wearable sensors offer more advanced and diverse health monitoring capabilities. Methodology: This study analyzes the feasibility of using a semi-flexible antenna operating at 2.4 GHz (ISM band) and positioned around the neck and near the mouth to identify three activities: coughing, deep breathing, and idleness. A vector network analyzer (VNA) is used to collect time-varying complex reflection coefficient data from the perturbed antenna near-field. The reflection coefficient (S11) conveys nuanced information caused by simultaneous variations in the near-field radiation of the three activities across time. The signatures are sparsely represented with Gaussian-windowed Gabor spectrograms. The Gabor spectrogram is used as a sparse representation approach, which reassigns the ridges of the spectrogram images to improve their resolution and focus on essential components. The antenna is biocompatible in terms of specific absorption rate (SAR).
The sparsely represented Gabor spectrogram images are fed into a lightweight deep learning (DL) model for feature extraction and classification. Two antenna locations are investigated to determine the most effective placement for the three activities. Findings: Cross-validation techniques were used on data from both locations. Due to the complex form of the recorded S11, separate analyses and assessments were performed on the magnitude, phase, and their combination. The combination of magnitude and phase fared better than the separate analyses. Various sliding window sizes, ranging from 1 to 5 seconds, were tested to find the best window for activity classification. It was discovered that a neck-mounted design was effective at detecting the three distinct activities.
Keywords: activity recognition, antenna, deep-learning, time-frequency
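As an illustration of the preprocessing step described above, the sketch below computes a Gaussian-windowed (Gabor) spectrogram for the magnitude and unwrapped phase of a complex S11 trace, producing the two-channel image a lightweight DL model would consume. The synthetic signal, sampling rate, and window parameters are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.signal.windows import gaussian

def gabor_spectrogram(s11, fs, win_len=64, std=8):
    """Gaussian-windowed (Gabor) spectrogram of a complex S11 trace.

    Returns stacked magnitude- and phase-channel spectrograms, i.e. the
    two-channel time-frequency image fed to a classifier. Window length
    and std are illustrative choices, not the paper's settings.
    """
    win = gaussian(win_len, std=std)
    chans = []
    for chan_sig in (np.abs(s11), np.unwrap(np.angle(s11))):
        f, t, Sxx = spectrogram(chan_sig, fs=fs, window=win,
                                noverlap=win_len // 2)
        chans.append(Sxx)
    return np.stack(chans)  # shape: (2, n_freq_bins, n_frames)

# Synthetic 10 s trace sampled at 100 Hz with a 3 Hz "breathing" modulation
# of both magnitude and phase (hypothetical stand-in for VNA data).
fs = 100
t = np.arange(0, 10, 1 / fs)
s11 = (0.5 + 0.1 * np.sin(2 * np.pi * 3 * t)) \
      * np.exp(1j * 0.2 * np.sin(2 * np.pi * 3 * t))
X = gabor_spectrogram(s11, fs)
print(X.shape)
```

A real pipeline would slide this over 1-5 s windows, as the abstract describes, before classification.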
142 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability issues. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor dropout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze.
It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability issues, does not rely on human observation, and does not depend on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
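A minimal sketch of the evaluation protocol described above (a random forest scored with leave-one-out cross-validation) is shown below. The synthetic nine-feature data set merely stands in for the actual eye-gaze/EEG/pose/interaction features, and all parameter choices are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical stand-in for the nine-feature engagement data set;
# in the study, labels come from continuous performance test outcomes.
rng = np.random.default_rng(0)
n = 60
X = rng.normal(size=(n, 9))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # engaged vs. disengaged

# Leave-one-out CV: one session held out per fold, as in the abstract.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(round(scores.mean(), 3))  # overall LOO accuracy
```

With leave-one-out, each fold's score is 0 or 1, so the mean over folds is the overall accuracy.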
141 The Superior Performance of Investment Bank-Affiliated Mutual Funds
Authors: Michelo Obrey
Abstract:
Traditionally, mutual funds have long been esteemed as stand-alone entities in the U.S. However, the prevalence of fund families' affiliation with financial conglomerates is eroding this striking feature. Mutual fund families' affiliation with financial conglomerates can potentially be an important source of superior performance, or of cost, to the affiliated mutual fund investors. On the one hand, financial conglomerate affiliation offers the mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, conflicts of interest are bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information advantage possessed by the investment banks, or costs affiliated mutual fund shareholders due to the conflict of interest. Robust to alternative risk adjustments and cross-sectional regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the paper finds that the outperformance is confined to holding returns, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are less likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds attract superior stock-picking talent.
Overall, the paper's findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, thus strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as private material information sharing, that benefit mutual fund investors due to affiliation with a financial conglomerate. However, this research also has a normative dimension: such incestuous behavior of insider trading and exploitation of superior information not only negatively affects unaffiliated fund investors but also leads to an unfair and unlevel playing field in the financial market.
Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank
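The risk-adjustment step mentioned above can be sketched as a Carhart-style four-factor regression, in which a fund's alpha is the intercept of its excess returns regressed on market, size, value, and momentum factors. The code below is a generic illustration on simulated data, not the paper's exact specification; all numbers are made up:

```python
import numpy as np

def four_factor_alpha(excess_ret, factors):
    """Estimate a fund's alpha from a Carhart-style factor regression:
    r_fund - r_f = alpha + b*MKT + s*SMB + h*HML + m*MOM + e.
    `factors` is a (T, 4) array of factor returns; returns (alpha, betas).
    A generic sketch of the risk adjustment, not the paper's methodology.
    """
    T = len(excess_ret)
    X = np.column_stack([np.ones(T), factors])  # intercept = alpha
    coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
    return coef[0], coef[1:]

# Simulated monthly data: a fund with a true alpha of 0.2% per month.
rng = np.random.default_rng(1)
T = 240
factors = rng.normal(0.0, 0.03, size=(T, 4))
true_betas = np.array([1.0, 0.2, -0.1, 0.05])
r = 0.002 + factors @ true_betas + rng.normal(0, 0.005, T)

alpha, betas = four_factor_alpha(r, factors)
print(round(alpha, 4))
```

A positive, significant alpha for affiliated funds relative to unaffiliated funds is the kind of evidence the abstract reports.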
140 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks
Authors: Afnan Al-Romi, Iman Al-Momani
Abstract:
Applying Software Engineering (SE) processes is of vital importance and a key requirement in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with such systems are risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open environment that may be unattended, in addition to resource constraints in terms of processing, storage, and power, places such networks under stringent limitations on lifetime (i.e., period of operation) and security. The importance of WSN applications, which are found in many military and civilian domains, has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, researchers have developed security solution systems in the form of software-based network Intrusion Detection Systems (IDSs). However, it has been witnessed that those developed IDSs are neither secure enough nor accurate enough to detect all malicious behaviours of attacks. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption that an IDS imposes on a WSN. In other words, not all requirements are implemented and then traced; moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks in current IDSs are due to researchers and developers not following structured software development processes when developing IDSs. Consequently, this has resulted in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads into industrial applications. Therefore, this paper studies the importance of Requirement Engineering when developing IDSs. It also studies a set of existing IDSs and illustrates the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding applying requirement engineering to systems so that they deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy, and reliability.
Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN
139 Impaired Transient Receptor Potential Vanilloid 4-Mediated Dilation of Mesenteric Arteries in Spontaneously Hypertensive Rats
Authors: Ammar Boudaka, Maryam Al-Suleimani, Hajar BaOmar, Intisar Al-Lawati, Fahad Zadjali
Abstract:
Background: Hypertension is increasingly becoming a matter of medical and public health importance. The maintenance of normal blood pressure requires a balance between cardiac output and total peripheral resistance. The endothelium, through the release of vasodilating factors, plays an important role in the control of total peripheral resistance and hence blood pressure homeostasis. Transient Receptor Potential Vanilloid type 4 (TRPV4) is a mechanosensitive, non-selective cation channel that is expressed on the endothelium and contributes to endothelium-mediated vasodilation. So far, no data are available about the morphological and functional status of this channel in hypertension. Objectives: This study aimed to investigate whether there is any difference in the morphological and functional features of TRPV4 in the mesenteric artery of normotensive and hypertensive rats. Methods: The function of TRPV4 was studied in four experimental animal groups (young and adult Wistar-Kyoto rats, WKY-Y and WKY-A, and young and adult spontaneously hypertensive rats, SHR-Y and SHR-A) by adding 5 µM 4αPDD (a TRPV4 agonist) to mesenteric arteries mounted in a four-chamber wire myograph and pre-contracted with 4 µM phenylephrine. The 4αPDD-induced response was investigated in the presence and absence of 1 µM HC067047 (a TRPV4 antagonist), 100 µM L-NAME (a nitric oxide synthase inhibitor), and the endothelium. The morphological distribution of TRPV4 in the wall of rat mesenteric arteries was investigated by immunostaining. Real-time PCR was used to investigate the mRNA expression level of TRPV4 in the mesenteric arteries of the four groups. The collected data were expressed as mean ± S.E.M., with n equal to the number of animals used (one vessel was taken from each rat). To determine the level of significance, statistical comparisons were performed using Student's t-test and considered significantly different at p<0.05.
Results: 4αPDD induced a relaxation response in the mesenteric arterial preparations (WKY-Y: 85.98% ± 4.18; n = 5) that was markedly inhibited by HC067047 (18.30% ± 2.86; n = 5; p<0.05), endothelium removal (19.93% ± 1.50; n = 5; p<0.05), and L-NAME (28.18% ± 3.09; n = 5; p<0.05). The 4αPDD-induced relaxation was significantly lower in SHR-Y than in WKY-Y (SHR-Y: 70.96% ± 3.65; n = 6, WKY-Y: 85.98% ± 4.18; n = 5, p<0.05). Moreover, the 4αPDD-induced response was significantly lower in WKY-A than in WKY-Y (WKY-A: 75.58% ± 1.30; n = 5, WKY-Y: 85.98% ± 4.18; n = 5, p<0.05). The immunostaining study showed an immunofluorescent signal confined to the endothelial layer of the mesenteric arteries. The expression of TRPV4 mRNA in SHR-Y was significantly lower than in WKY-Y (SHR-Y: 0.67 RU ± 0.34; n = 4, WKY-Y: 2.34 RU ± 0.15; n = 4, p<0.05). Furthermore, TRPV4 mRNA expression in WKY-A was lower than in WKY-Y (WKY-A: 0.62 RU ± 0.37; n = 4, WKY-Y: 2.34 RU ± 0.15; n = 4, p<0.05). Conclusion: Stimulation of TRPV4, which is expressed on the endothelium of the rat mesenteric artery, triggers an endothelium-mediated relaxation response that markedly decreases with hypertension and aging due to downregulation of TRPV4 expression.
Keywords: hypertension, endothelium, mesenteric artery, TRPV4
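The group comparisons above can be reproduced from the reported summary statistics alone. The sketch below applies Student's t-test to the WKY-Y vs. SHR-Y relaxation data, assuming (as the abstract states) that the ± values are S.E.M., which must first be converted back to standard deviations:

```python
import numpy as np
from scipy.stats import ttest_ind_from_stats

# Group summaries taken from the abstract (mean ± SEM, n animals).
wky_y = dict(mean=85.98, sem=4.18, n=5)
shr_y = dict(mean=70.96, sem=3.65, n=6)

def sem_to_sd(sem, n):
    # SEM = SD / sqrt(n), hence SD = SEM * sqrt(n)
    return sem * np.sqrt(n)

# Two-sample Student's t-test from summary statistics only.
t, p = ttest_ind_from_stats(
    wky_y["mean"], sem_to_sd(wky_y["sem"], wky_y["n"]), wky_y["n"],
    shr_y["mean"], sem_to_sd(shr_y["sem"], shr_y["n"]), shr_y["n"])
print(round(t, 2), round(p, 3))
```

The resulting two-sided p-value falls below 0.05, consistent with the significance the abstract reports for this comparison.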
138 Augusto De Campos Translator: The Role of Translation in Brazilian Concrete Poetry Project
Authors: Juliana C. Salvadori, Jose Carlos Felix
Abstract:
This paper aims at discussing the role literary translation has played in the Brazilian Concrete Poetry Movement – an aesthetic, critical, and pedagogical project which conceived translation as poiesis, i.e., as both creative and critical work in which the potency (dynamis) of the literary work is unfolded in the interpretive and critical act (energeia) the translating practice demands. We argue that translation, for the concrete poets, is conceived within the framework provided by the reinterpretation – or deglutition – of Oswald de Andrade's anthropophagy: a carefully selected feast from which the poets pick and model their Paideuma. As a case study, we propose to approach and analyze two of Augusto de Campos's long-term translation projects: the translation of Emily Dickinson's and E. E. Cummings's works for Brazilian readers. Augusto de Campos is a renowned poet, translator, critic, and one of the founding members of the Brazilian Concrete Poetry movement. Since the 1950s he has produced a consistent body of translated poetry from English-speaking poets in which the translator has explored creative translation processes – transcreation, as the concrete poets have named it. Campos's translation project regarding E. E. Cummings's poetry spans forty years: it begins in 1956 with 10 poems and unfolds in four works – 20 poem(a)s, 40 poem(a)s, Poem(a)s, re-edited in 2011. His translations of Dickinson's poetry are published in two works: O Anticrítico (1986), in which he translated 10 poems, and Emily Dickinson: Não sou Ninguém (2008), in which the poet-translator added 35 more translated poems.
Both projects feature bilingual editions: contrary to common sense, Campos's translations aim at being read as such: the target readers, to fully enjoy the experience, must be proficient readers of English and also acquainted with the poets in translation – Campos expects us to perform translation criticism, as Antoine Berman has proposed, by assessing the choices he, as both translator and poet, has made in order to privilege aesthetic information (verse lines, word games, etc.). For readers not proficient in English, his translations play a pedagogical role of educating and preparing them to read both the target poets' works and concrete poetry works – the detailed essays and prefaces in which the translator emphasizes the selection of works translated and the strategies adopted illuminate his project as translator: for Cummings, it has led to the obliteration of the more traditional and lyrical/romantic examples of his poetry while highlighting the more experimental aspects and poems; for Dickinson, his project has highlighted the more hermetic traits of her poems. In this work, we analyze Campos's contribution to the domestic canons of both poets in the Brazilian literary system.
Keywords: translation criticism, Augusto de Campos, E. E. Cummings, Emily Dickinson
137 Making Meaning, Authenticity, and Redefining a Future in Former Refugees and Asylum Seekers Detained in Australia
Authors: Lynne McCormack, Andrew Digges
Abstract:
Since 2013, the Australian government has enforced mandatory detention of anyone arriving in Australia without a valid visa, including those subsequently identified as a refugee or seeking asylum. While consistent with the increased use of immigration detention internationally, Australia's use of offshore processing facilities both during and subsequent to refugee status determination processing has until recently remained a unique feature of Australia's program of deterrence. The commonplace detention of refugees and asylum seekers following displacement is a significant and independent source of trauma and a contributory factor in adverse psychological outcomes. Officially, these individuals have no prospect of resettlement in Australia, are barred from applying for substantive visas, and are frequently and indefinitely detained in closed facilities such as immigration detention centres, or alternative places of detention, including hotels. It is also important to note that the limited access to Australia's immigration detention population made available to researchers often means that data available for secondary analysis may be incomplete or delayed in its release. Further, studies into the lived experience of refugees and asylum seekers are typically cross-sectional and convenience sampled, employing a variety of designs and research methodologies that limit comparability, and are focused on the immediacy of the individual's experience. Consequently, how former detainees make sense of their experience, redefine their future trajectory upon release, and recover a sense of authenticity and purpose is unknown. As such, the present study sought the positive and negative subjective interpretations of 6 participants in Australia regarding their lived experiences as refugees and asylum seekers within Australia's immigration detention system and its impact on their future sense of self.
It made use of interpretative phenomenological analysis (IPA), a qualitative research methodology that is interested in how individuals make sense of, and ascribe meaning to, their unique lived experiences of phenomena. Underpinned by phenomenology, hermeneutics, and critical realism, this idiographic study aimed to explore both positive and negative subjective interpretations of former refugees and asylum seekers held in detention in Australia. It sought to understand how they make sense of their experiences, how detention has impacted their overall journey as displaced persons, and how they have moved forward in the aftermath of protracted detention in Australia. Examining the unique lived experiences of previously detained refugees and asylum seekers may inform the future development of theoretical models of posttraumatic growth among this vulnerable population, thereby informing the delivery of future mental health and resettlement services.
Keywords: mandatory detention, refugee, asylum seeker, authenticity, interpretative phenomenological analysis
136 Superlyophobic Surfaces for Increased Heat Transfer during Condensation of CO₂
Authors: Ingrid Snustad, Asmund Ervik, Anders Austegard, Amy Brunsvold, Jianying He, Zhiliang Zhang
Abstract:
CO₂ capture, transport and storage (CCS) is essential to mitigate global anthropogenic CO₂ emissions. To make CCS a widely implemented technology in, e.g., the power sector, the reduction of costs is crucial. For a large cost reduction, every part of the CCS chain must contribute. By increasing the heat transfer efficiency during liquefaction of CO₂, a necessary step for, e.g., ship transportation, the costs associated with the process are reduced. Heat transfer rates during dropwise condensation are up to one order of magnitude higher than during filmwise condensation. Dropwise condensation usually occurs on a non-wetting (superlyophobic) surface. The vapour condenses in discrete droplets, and the non-wetting nature of the surface reduces the adhesion forces and results in shedding of condensed droplets. This, again, results in fresh nucleation sites for further droplet condensation, effectively increasing the liquefaction efficiency. In addition, the droplets in themselves have a smaller heat transfer resistance than a liquid film, resulting in increased heat transfer rates from vapour to solid. Surface tension is a crucial parameter for dropwise condensation, due to its impact on the solid-liquid contact angle. A low surface tension usually results in a low contact angle and, in turn, spreading of the condensed liquid on the surface. CO₂ has a very low surface tension compared to water. However, at relevant temperatures and pressures for CO₂ condensation, its surface tension is comparable to that of organic compounds such as pentane; dropwise condensation of CO₂ is thus a completely new field of research. Therefore, knowledge of several important parameters, such as contact angle and drop size distribution, must be gained in order to understand the nature of the condensation. A new setup has been built to measure these relevant parameters. The main parts of the experimental setup are a pressure chamber in which the condensation occurs, and a high-speed camera.
The process of CO₂ condensation is visually monitored, and one can determine the contact angle, contact angle hysteresis and, hence, the surface adhesion of the liquid. CO₂ condensation on different surfaces can be analysed, e.g. copper, aluminium and stainless steel. The experimental setup is built for accurate measurements of the temperature difference between the surface and the condensing vapour and accurate pressure measurements in the vapour. The temperature is measured directly underneath the condensing surface. The next step of the project will be to fabricate nanostructured surfaces for inducing superlyophobicity. Roughness is a key feature for achieving contact angles above 150° (the limit for superlyophobicity), and controlled, periodic roughness on the nanoscale is beneficial. Surfaces that are non-wetting towards organic non-polar liquids are candidate surface structures for dropwise condensation of CO₂.
Keywords: CCS, dropwise condensation, low surface tension liquid, superlyophobic surfaces
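As a simple example of the kind of quantity extracted from the high-speed images, the sketch below estimates a static contact angle from a droplet's height and contact-base radius under the common spherical-cap assumption (gravity flattening neglected). The droplet dimensions are made up for illustration:

```python
import numpy as np

def contact_angle_spherical_cap(height, base_radius):
    """Contact angle (degrees) from droplet height h and contact-base
    radius a, assuming the sessile drop is a spherical cap:
    h/a = tan(theta/2), so theta = 2*atan(h/a).
    A first-order estimate used in drop-shape analysis; valid for
    angles up to 180 degrees."""
    return np.degrees(2 * np.arctan2(height, base_radius))

# Illustrative droplet: tall relative to its contact base, as expected
# on a superlyophobic surface (dimensions in mm, hypothetical).
theta = contact_angle_spherical_cap(height=1.2, base_radius=0.3)
print(round(theta, 1), theta > 150)
```

A result above 150° would classify the surface as superlyophobic by the criterion quoted above.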
135 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging in the sense that the temperature inside the material before loading is not uniform, yet a constant temperature is commonly used in simulations because it is assumed that the temperature has homogenized after some holding time. Therefore, to be close to the experiment, the real distribution of the temperature through the specimen is needed before the mechanical loading. Thus, we present here a robust algorithm that allows the calculation of the temperature gradient within the specimen, thus representing a real temperature distribution within the specimen before deformation. Indeed, most numerical simulations consider a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena can be observed, like recrystallization, which requires in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product.
Thus, the identification of the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so the development of a technique to predict the initiation of this recrystallization remains challenging. In this perspective, we propose here, in addition to the algorithm that yields the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented into the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
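The temperature-gradient idea can be illustrated with a minimal one-dimensional transient-conduction solver: starting from a hot core and a cooler surface, an explicit finite-difference scheme yields the non-uniform through-thickness temperature profile to use as the initial condition, instead of an assumed constant temperature. This is only a sketch of the concept; the paper's algorithm is implemented through Abaqus UAMP/VUHARD subroutines, and all numbers below are illustrative:

```python
import numpy as np

def heat_1d(T_surface, T_core, L, alpha, dt, n_nodes, n_steps):
    """Explicit finite-difference solution of the 1D heat equation
    dT/dt = alpha * d2T/dx2, a minimal stand-in for the temperature-
    gradient computation described in the abstract.
    Surface nodes are held at T_surface (Dirichlet boundary); the
    interior starts at T_core."""
    dx = L / (n_nodes - 1)
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit"
    T = np.full(n_nodes, T_core, dtype=float)
    T[0] = T[-1] = T_surface
    for _ in range(n_steps):
        T[1:-1] += r * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

# A 0.1 m section: surface at 900 degC, core initially at 1100 degC
# (illustrative values, not the forging process conditions).
T = heat_1d(900.0, 1100.0, L=0.1, alpha=1e-5, dt=0.1,
            n_nodes=51, n_steps=2000)
print(round(T[len(T) // 2], 1))  # mid-thickness temperature after 200 s
```

The resulting profile (hot center, cooler surface) is what would be mapped onto the FE mesh before the mechanical loading step.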
134 Stability Study of Hydrogel Based on Sodium Alginate/Poly (Vinyl Alcohol) with Aloe Vera Extract for Wound Dressing Application
Authors: Klaudia Pluta, Katarzyna Bialik-Wąs, Dagmara Malina, Mateusz Barczewski
Abstract:
Hydrogel networks, due to their unique properties, are highly attractive materials for wound dressings. The three-dimensional structure of hydrogels provides tissues with optimal moisture, which supports the wound healing process. Moreover, a characteristic feature of hydrogels is their absorption properties, which allow for the absorption of wound exudates. For the fabrication of biomedical hydrogels, a combination of natural polymers ensuring biocompatibility and synthetic ones that provide adequate mechanical strength is often used. Sodium alginate (SA) is one of the polymers widely used in wound dressing materials because it exhibits excellent biocompatibility and biodegradability. However, due to its poor strength properties, alginate-based hydrogel materials are often enhanced by the addition of another polymer, such as poly(vinyl alcohol) (PVA). This paper concentrates on the preparation of a sodium alginate/poly(vinyl alcohol) hydrogel system incorporating Aloe vera extract and glycerin as a wound healing material, with particular focus on the role of composition in structure, thermal properties, and stability. Briefly, the hydrogel preparation is based on a chemical cross-linking method using poly(ethylene glycol) diacrylate (PEGDA, Mn = 700 g/mol) as a cross-linking agent and ammonium persulfate as an initiator. In vitro degradation tests of the SA/PVA/AV hydrogels were carried out in Phosphate-Buffered Saline (pH 7.4) as well as in distilled water. Hydrogel samples were first cut into half-gram pieces (in triplicate) and immersed in the immersion fluid. All specimens were then incubated at 37°C, and the pH and conductivity values were measured at time intervals. The post-incubation fluids were analyzed using SEC/GPC to check the content of oligomers. The separation was carried out at 35°C on a poly(hydroxy methacrylate) column (dimensions 300 x 8 mm). A 0.1 M NaCl solution, at a flow rate of 0.65 ml/min, was used as the mobile phase.
Three injections with a volume of 50 µl were made for each sample. The thermogravimetric data of the prepared hydrogels were collected using a Netzsch TG 209 F1 Libra apparatus. Samples with masses of about 10 mg were weighed separately into Al₂O₃ crucibles and then heated from 30°C to 900°C at a scanning rate of 10 °C∙min⁻¹ under a nitrogen atmosphere. Based on the conducted research, a fast and simple method was developed to produce a potential wound dressing material containing sodium alginate, poly(vinyl alcohol), and Aloe vera extract. As a result, transparent and flexible SA/PVA/AV hydrogels were obtained. The degradation experiments indicated that most of the samples immersed in PBS as well as in distilled water did not degrade throughout the whole incubation time.
Keywords: hydrogels, wound dressings, sodium alginate, poly(vinyl alcohol)
133 Evolution of Microstructure through Phase Separation via Spinodal Decomposition in Spinel Ferrite Thin Films
Authors: Nipa Debnath, Harinarayan Das, Takahiko Kawaguchi, Naonori Sakamoto, Kazuo Shinozaki, Hisao Suzuki, Naoki Wakiya
Abstract:
Nowadays, spinel ferrite magnetic thin films have drawn considerable attention due to their interesting magnetic and electrical properties along with enhanced chemical and thermal stability. Spinel ferrite magnetic films can be implemented in magnetic data storage, sensors, spin filters, and microwave devices. It is well established that the structural, magnetic, and transport properties of magnetic thin films are dependent on microstructure. Spinodal decomposition (SD) is a phase separation process whereby a material system spontaneously separates into two phases with distinct compositions. A periodic microstructure is the characteristic feature of SD. Thus, SD can be exploited to control the microstructure at the nanoscale level. In bulk spinel ferrites having the general formula MₓFe₃₋ₓO₄ (M = Co, Mn, Ni, Zn), phase separation via SD has been reported only for cobalt ferrite (CFO); however, long post-annealing is required for spinodal decomposition to occur. We have found that SD occurs in CFO thin films without any post-deposition annealing process if a magnetic field is applied during thin film growth. Dynamic Aurora pulsed laser deposition (PLD) is a specially designed PLD system through which an in-situ magnetic field (up to 2000 G) can be applied during thin film growth. The in-situ magnetic field suppresses the recombination of ions in the plume. In addition, the intensity of the ion peaks in the spectra of the plume also increases when a magnetic field is applied. As a result, ions with high kinetic energy strike the substrate. Thus, ion impingement occurs under the magnetic field during thin film growth. The driving force of SD is the ion impingement towards the substrate induced by the in-situ magnetic field. In this study, we report the occurrence of phase separation through SD and the evolution of microstructure after phase separation in spinel ferrite thin films.
The surface morphology of the phase-separated films shows a checkerboard-like domain structure. The cross-sectional microstructure of the phase-separated films reveals columnar-type phase separation. Herein, the decomposition wave propagates in the lateral direction, which has been confirmed from the lateral composition modulations in the spinodally decomposed films. Large magnetic anisotropy has been found in spinodally decomposed nickel ferrite (NFO) thin films. This approach confirms that a magnetic field is also an important thermodynamic parameter for inducing phase separation through the enhancement of uphill diffusion in thin films. This thin film deposition technique could be an efficient alternative for the fabrication of self-organized phase-separated thin films and could be employed to control microstructure at the nanoscale.
Keywords: Dynamic Aurora PLD, magnetic anisotropy, spinodal decomposition, spinel ferrite thin film
Procedia PDF Downloads 365
132 Obtaining Composite Cotton Fabric by Cyclodextrin Grafting
Authors: U. K. Sahin, N. Erdumlu, C. Saricam, I. Gocek, M. H. Arslan, H. Acikgoz-Tufan, B. Kalav
Abstract:
Finishing is an important part of fabric processing, through which a wide range of features are imparted to greige or colored fabrics for various end-uses. In particular, by adding nano-scaled particles to the fabric structure, composite fabrics, a kind of composite material, can be obtained. Composite materials, generally shortened to composites or also called composition materials, are engineered or naturally occurring materials made from two or more component materials with significantly different physical, mechanical, or chemical characteristics that remain separate and distinct at the macroscopic or microscopic scale within the end product. Therefore, finishing, one of the fundamental methods applied to fabrics, was used in the current study to obtain composite fabrics with multiple functionalities. However, regardless of the finishing materials applied, the effective lifetime of a finished product in delivering the desired feature is low, since the durability of finishes on the material is limited. Any increase in the durability of these finishes would extend the useful life of the textiles and thus benefit their users. Therefore, in this study, to achieve higher durability of the finishing materials fixed on the fabrics, nano-scaled hollow-structured cyclodextrins were chemically imparted to the structure of conventional cotton fabrics by grafting via the finishing technique, so as to be fixed permanently. In this way, a processed and functionalized base fabric was obtained that can be treated in subsequent processes with many different finishing agents and nanomaterials. Henceforth, this fabric can be used as a multi-functional fabric owing to the ability of cyclodextrins to capture molecules/particles via physical/chemical means.
In this study, scoured, bleached, plain-weave 100% woven cotton fabrics were used, since cotton textiles are among the most demanded textile products in daily life. Cotton fabric samples were immersed in treatment baths containing β-cyclodextrin and 1,2,3,4-butanetetracarboxylic acid; sodium hypophosphite monohydrate was used as a catalyst to reduce the curing temperature. All impregnated fabric samples were pre-dried, and the grafting reaction was performed in the dry state. The treated and cured fabric samples were rinsed with warm distilled water and dried. The samples were dried for 4 h and weighed before and after finishing and rinsing. The stability and durability of β-cyclodextrins on the fabric surface against external factors such as washing, as well as the strength of the functionalized fabric in terms of tensile and tear strength, were tested. The presence and homogeneity of distribution of β-cyclodextrins on the fabric surface were characterized.
Keywords: cotton fabric, cyclodextrin, improved durability, multifunctional composite textile
Procedia PDF Downloads 295
131 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties
Authors: Hsyi-En Cheng, Ying-Yi Liou
Abstract:
Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate with appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower the working temperature due to their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we propose a method to fabricate nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of SnO₂ nanotube arrays incorporates the techniques of a barrier-free anodic aluminum oxide (AAO) template and atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron beam evaporation and then anodically oxidized in 5 wt% phosphoric acid solution at 5°C under a constant voltage of 100 V to form porous aluminum oxide. Once the Al film was fully oxidized, a 15 min over-anodization and a 30 min post chemical dissolution were used to remove the barrier oxide at the bottom end of the pores and generate a barrier-free AAO template. ALD using SnCl₄ and H₂O as reactants followed, growing a thin layer of SnO₂ on the template to form SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in 5 wt% phosphoric acid solution at 50°C, upright-standing SnO₂ nanotube arrays on ITO glass were produced. Finally, an Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability.
Flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films were related to the deposition temperature. The films grown at 350°C had a low electrical resistivity of 3.6×10⁻³ Ω·cm and were, therefore, used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films, characterized by an Ecopia HMS-3000 Hall-effect measurement system, were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂–95% N₂ mixture gas was monitored by a Picotest M3510A 6½-digit multimeter. It was found that, at 200°C, the 30-nm-wall SnO₂ nanotube-array sensor exhibits the highest response to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100°C, all the samples were insensitive to the 5% H₂ gas. Further investigation of sensors with thinner SnO₂ is necessary to improve the sensing ability at temperatures below 100°C.
Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide
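For an n-type oxide such as SnO₂ exposed to a reducing gas like H₂, sensor response is commonly quantified as the ratio of the resistance in air to the resistance in the target gas. A minimal sketch of that calculation follows; the resistance values are illustrative placeholders, not measurements from this work:

```python
def gas_response(r_air_ohm: float, r_gas_ohm: float) -> float:
    """Response S = R_air / R_gas for an n-type oxide in a reducing gas."""
    if r_air_ohm <= 0 or r_gas_ohm <= 0:
        raise ValueError("resistances must be positive")
    return r_air_ohm / r_gas_ohm

# Illustrative values only: resistance drops in H2 for an n-type SnO2 sensor,
# so a larger ratio means a stronger response.
s_nanotube_30nm = gas_response(r_air_ohm=5.0e6, r_gas_ohm=2.0e5)  # 25.0
s_film_30nm = gas_response(r_air_ohm=4.0e6, r_gas_ohm=4.0e5)      # 10.0
print(s_nanotube_30nm > s_film_30nm)  # True
```

The ordering reported in the abstract (30-nm-wall nanotubes above the flat films) corresponds to a larger response ratio for that sample at 200°C.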
Procedia PDF Downloads 241
130 Connecting the Dots: Bridging Academia and National Community Partnerships When Delivering Healthy Relationships Programming
Authors: Nicole Vlasman, Karamjeet Dhillon
Abstract:
Over the past four years, the Healthy Relationships Program has been delivered in community organizations and schools across Canada. More than 240 groups have been facilitated in collaboration with 33 organizations, and as a result, 2,157 youth have been engaged in the programming. The purpose and scope of the Healthy Relationships Program are to offer sustainable, evidence-based skills through small-group implementation to prevent violence and promote positive, healthy relationships in youth. The program development has included extensive networking at regional and national levels. The Healthy Relationships Program is currently being implemented, adapted, and researched within the Resilience and Inclusion through Strengthening and Enhancing Relationships (RISE-R) project. Alongside the project’s research objectives, the RISE-R team has worked to share the ongoing findings of the project virtually through a slow ontology approach. Slow ontology is a practice integrated into project systems and structures whereby slowing the pace and volume of outputs offers creative opportunities; creative production reveals different layers of success and complements the project’s building blocks for sustainability. As a result of integrating a slow ontology approach, the RISE-R team has developed a Geographic Information System (GIS) that documents local landscapes through a Story Map feature and, more specifically, video installations. Video installations capture the cartography of space and place within the context of singular, diverse community spaces (case studies). By documenting spaces via human connections, the project captures narratives, which further amplify the voices and faces of the community within the larger project scope. This GIS project aims to create a visual and interactive flow of information that complements the project's mixed-methods research approach.
In conclusion, creative project development in the form of a geographic information system can provide learning and engagement opportunities at many levels (i.e., within community organizations and educational spaces or with the general public). In each of these disconnected spaces, fragmented stories are connected through a visual display of project outputs. A slow ontology practice within the context of the RISE-R project documents activities on the fringes and within internal structures, primarily by documenting project successes as further contributions to the Centre for School Mental Health framework (philosophy, recruitment techniques, allocation of resources and time, and a shared commitment to evidence-based products).
Keywords: community programming, geographic information system, project development, project management, qualitative, slow ontology
Procedia PDF Downloads 155
129 Re-Orienting Fashion: Fashionable Modern Muslim Women beyond Western Modernity
Authors: Amany Abdelrazek
Abstract:
Fashion is considered a main feature of modern and postmodern capitalist and consumerist society. Consumer historians maintain that fashion, namely, a sector of people embracing a prevailing clothing style for a short period, started during the Middle Ages but gained popularity later. It symbolised the transition from a medieval society with its solid, fixed religious values into a modern society with its secular, dynamic consumer culture. Renaissance society was a modern secular society in its preoccupation with daily life and changing circumstances. Yet the late 18th-century Industrial Revolution revolutionised thought and ideology in Europe, reinforcing the Western belief in rationality and strengthening the position of science. In such a rational Western society, modernity, with its new ideas, came to challenge the whole idea of old fixed norms, reflecting a modern secular, rational culture and renouncing the medieval pious consumer. In modern society, supported by the Industrial Revolution and mass production, fashion encouraged broader sectors of society to take part in a fashion previously reserved for the aristocracy and royal courts. Moreover, the fashion project emphasizes the human body and its beauty, contradicting a Judeo-Christian culture that tends to abhor and criticize interest in sensuality and hedonism. In mainstream Western discourse, fashionable dress differentiates the emancipated, stylish, consumerist, secular modern female from the assumedly oppressed, traditional, modest, religious female. Opposing this discourse, I look at the controversy over what has been called "Islamic fashion", which started during the 1980s and has continued to gain popularity in contemporary Egyptian society.
I discuss the challenges of being a fashionable, practising Muslim female in light of two prominent models of female "Islamic fashion" in postcolonial Egypt: Jasmin Mohshen, the first hijabi model in Egypt, and Manal Rostom, the first Muslim woman to represent the Nike campaign in the Middle East. The research employs fashion and postcolonial theories to rethink current Muslim women's positions on women's emancipation, Western modernity, and practising faith in postcolonial Egypt. The paper argues that Muslim women's current innovative and fashionable dress can work as a counter-discourse to the Orientalist and exclusive representation of non-Western Muslim culture as an inherently inert, timeless culture. Furthermore, "Islamic" fashionable dress as an aesthetic medium for expressing ideas and convictions in contemporary Egypt interrogates the claim of a universal secular modernity and Western fashion theorists' reluctance to consider Islamic fashion as fashion.
Keywords: fashion, Muslim women, modernity, secularism
Procedia PDF Downloads 129
128 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge for doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia has been time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results from this development work can be applied to all other types of leukemia. To develop our model, a Kaggle dataset was used consisting of 15,135 images in total, of which 8,491 are images of abnormal cells and 5,398 are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger datasets. The proposed diagnostic system detects and classifies leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50.
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, features fused from specific abstraction layers can be treated as auxiliary features and lead to further improvement of the classification accuracy. Features extracted from lower levels are combined into higher-dimension feature maps to improve the discriminative capability of intermediate features and to mitigate vanishing or exploding gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model's performance, with their pros and cons, will be presented at the conference.
Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
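The fusion step described above can be sketched as follows. This is a minimal illustration with stand-in feature extractors: the real system would use pretrained VGG19 and ResNet50 backbones, and the 512- and 2048-dimensional pooled outputs assumed here are typical of those networks but are not specified in the abstract:

```python
import numpy as np

def backbone_a(image: np.ndarray) -> np.ndarray:
    """Stand-in for globally pooled VGG19 features (hypothetical 512-d)."""
    rng = np.random.default_rng(int(image.sum()) + 1)
    return rng.standard_normal(512)

def backbone_b(image: np.ndarray) -> np.ndarray:
    """Stand-in for globally pooled ResNet50 features (hypothetical 2048-d)."""
    rng = np.random.default_rng(int(image.sum()) + 2)
    return rng.standard_normal(2048)

def fused_features(image: np.ndarray) -> np.ndarray:
    """Hybrid step: concatenate both branches into one vector for a classifier head."""
    return np.concatenate([backbone_a(image), backbone_b(image)])

image = np.zeros((224, 224, 3), dtype=np.float32)  # placeholder blood-smear image
print(fused_features(image).shape)  # (2560,): 512 + 2048 fused dimensions
```

A classifier head (dense layers ending in a softmax) would then be trained on the concatenated vector; the abstract does not detail which layers were tapped, so this sketch shows only the concatenation pattern.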
Procedia PDF Downloads 186
127 Assessing the Geothermal Parameters by Integrating Geophysical and Geospatial Techniques at Siwa Oasis, Western Desert, Egypt
Authors: Eman Ghoneim, Amr S. Fahil
Abstract:
Many regions in Egypt are facing a reduction in crop productivity due to environmental degradation. One factor in crop deterioration is the unsustainable drainage of surface water, leading to salinized soil conditions. Egypt has devoted time and effort to identifying solutions that mitigate the surface water drawdown problem and its resulting effects by exploring renewable and sustainable sources of energy. Siwa Oasis represents one of the most favorable regions in Egypt for geothermal exploitation, since it hosts an evident cluster of superficial thermal springs. Some of these hot springs are characterized by high surface temperatures and bottom hole temperatures (BHT) ranging from 20°C to 40°C and from 21°C to 121.7°C, respectively. The depth to the Precambrian basement rock is commonly greater than 440 m, ranging from 440 m to 4724.4 m. It is this feature that makes the locality of Siwa Oasis suitable for industrial processes and geothermal power production. In this study, BHT data from 27 deep oil wells were processed by applying the widely used Horner and Gulf of Mexico correction methods to obtain formation temperatures. BHT data, commonly used in geothermal studies, remain the most abundant and readily available source of subsurface temperature information. Outcomes of the present work indicated a geothermal gradient ranging from 18 to 42°C/km, a heat flow ranging from 24.7 to 111.3 mW·m⁻², and a thermal conductivity of 1.3–2.65 W·m⁻¹·K⁻¹. Remote sensing thermal infrared, topographic, geologic, and geothermal data were utilized to produce geothermal potential maps for the Siwa Oasis.
Important physiographic variables (including surface elevation, lineament density, and drainage density) and geological and geophysical parameters (including land surface temperature, depth to basement, bottom hole temperature, magnetics, geothermal gradient, heat flow, thermal conductivity, and main rock units) were incorporated into a GIS to produce a geothermal potential (GTP) map for the Siwa Oasis region. The model revealed that both the northeastern and southeastern sections of the study region have high geothermal potential. The present work showed that combining bottom-hole temperature measurements and remote sensing data with the selected geospatial methodologies is a useful tool for geothermal prospecting in geologically and tectonically comparable settings in Egypt and East Africa. This work has implications for identifying the sustainable resources needed to support food production and renewable energy development.
Keywords: BHT, geothermal potential map, geothermal gradient, heat flow, thermal conductivity, satellite imagery, GIS
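The Horner correction mentioned above extrapolates the static formation temperature from a series of BHT readings taken at different shut-in times after mud circulation stopped: temperatures are regressed against ln((t_c + Δt)/Δt) and extrapolated to infinite shut-in time, where the log term vanishes. A minimal sketch under those assumptions (the circulation time and readings below are synthetic, not data from the Siwa wells):

```python
import math
import numpy as np

def horner_formation_temperature(circulation_h: float,
                                 shutin_h: list,
                                 bht_c: list) -> float:
    """Extrapolate static formation temperature via a Horner plot.

    Fits BHT = T_formation + slope * ln((t_c + dt) / dt); as dt -> infinity
    the log term -> 0, so the regression intercept is the formation temperature.
    """
    x = [math.log((circulation_h + dt) / dt) for dt in shutin_h]
    slope, intercept = np.polyfit(x, bht_c, 1)
    return float(intercept)

# Synthetic well: true formation temperature 120 C, circulation time 4 h,
# BHTs logged 6, 12, and 24 h after circulation stopped.
t_f, t_c = 120.0, 4.0
shutin = [6.0, 12.0, 24.0]
bht = [t_f - 30.0 * math.log((t_c + dt) / dt) for dt in shutin]
print(round(horner_formation_temperature(t_c, shutin, bht), 1))  # 120.0
```

The geothermal gradient then follows as (T_formation − T_surface)/depth, and heat flow as that gradient multiplied by the thermal conductivity, matching the quantities reported in the abstract.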
Procedia PDF Downloads 118
126 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning
Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B.Fritschi
Abstract:
Soybean and its derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the significance of soybean production depends on the quality of the soybean seeds rather than the yield alone. Seed composition depends widely on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high-temporal-resolution remote sensing datasets. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth due to their frequent revisits throughout the world. In this study, we estimate soybean seed composition while the plants are still in the field by utilizing PS satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and seed composition was measured from the samples as ground truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector machine (SVM), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithms that best predicted the six seed composition traits differed.
GRU worked well for oil (R-squared: 0.53) and protein (R-squared: 0.36), whereas SVM and PLSR showed the best results for sucrose (R-squared: 0.74) and ash (R-squared: 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features were found to be more important than texture features. It is suggested to utilize many vegetation indices for machine learning training and to select the best ones using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be taken when designing the plot size in the experiments to avoid mixed-pixel issues.
Keywords: agriculture, computer vision, data science, geospatial technology
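The R-squared values reported above measure the fraction of variance in a measured seed trait explained by a model's predictions. A minimal sketch of the metric itself (the measured/predicted values are illustrative placeholders, not the study's data):

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative: predicted vs. measured oil content (% of seed mass) for five plots.
measured = np.array([18.2, 19.1, 20.5, 21.0, 22.3])
predicted = np.array([18.6, 18.9, 20.1, 21.4, 21.9])
print(round(r_squared(measured, predicted), 2))  # 0.93
```

An R-squared of 1.0 would mean perfect prediction; values such as the 0.74 reported for sucrose indicate that most, but not all, of the between-plot variance is captured by the imagery-derived features.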
Procedia PDF Downloads 136
125 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography
Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner
Abstract:
Nowadays, three-dimensional cone beam CT (CBCT) has become a widespread routine clinical imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as for daily pretreatment patient alignment in radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections, which could therefore lead to dose reduction. In this study, we investigate source-detector trajectories with optimal arbitrary orientations that maximize the performance of the reconstructed image at particular regions of interest. To achieve this, we developed a box phantom containing several small polytetrafluoroethylene target spheres at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D point spread function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the full width at half maximum (FWHM) of the local PSFs, each related to a particular target. A lower FWHM value indicates better spatial resolution of the reconstruction at the target area. One important feature of interventional radiology is that the imaging targets are very well known, as prior knowledge of patient anatomy (e.g., a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and treat it as the digital phantom in our simulations to find the optimal trajectory for a specific target.
Based on the simulation phase, we obtain the optimal trajectory, which can then be applied to the device in a real situation. We used a Philips Allura FD20 Xper C-arm geometry for the simulations and real data acquisition. Our experimental results, based on both simulation and real data, show that the proposed optimization scheme is able to find optimized trajectories with a minimal number of projections to localize the targets. The optimized trajectories localize the targets as well as a standard circular trajectory while using only one third of the projections. In conclusion, we demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose.
Keywords: CBCT, C-arm, reconstruction, trajectory optimization
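The FWHM figure of merit used above can be estimated from a sampled PSF profile by locating its two half-maximum crossings; for a Gaussian profile this should recover FWHM = 2·sqrt(2·ln 2)·σ ≈ 2.355σ. A minimal sketch on a synthetic 1-D profile (not the phantom data):

```python
import numpy as np

def fwhm(x: np.ndarray, profile: np.ndarray) -> float:
    """Estimate FWHM of a single-peaked 1-D profile by linearly
    interpolating the two half-maximum crossings."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i_left, i_right = above[0], above[-1]

    def cross(i_lo, i_hi):
        # Linear interpolation between two samples straddling the half level.
        x0, x1 = x[i_lo], x[i_hi]
        y0, y1 = profile[i_lo], profile[i_hi]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    return cross(i_right, i_right + 1) - cross(i_left - 1, i_left)

sigma = 0.8  # mm, synthetic local PSF width
x = np.linspace(-5, 5, 1001)
psf = np.exp(-x**2 / (2 * sigma**2))
print(round(fwhm(x, psf), 3))  # close to 2.355 * 0.8 = 1.884 mm
```

In the study's setting, the same measurement would be taken along each axis of the local 3D PSF around a target sphere; a smaller FWHM signals sharper reconstruction there.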
Procedia PDF Downloads 131
124 Spectroscopy and Electron Microscopy for the Characterization of CdSxSe1-x Quantum Dots in a Glass Matrix
Authors: C. Fornacelli, P. Colomban, E. Mugnaioli, I. Memmi Turbanti
Abstract:
When semiconductor particles are reduced to nanometer dimensions, their optical and electro-optical properties differ strongly from those of bulk crystals of the same composition. Since sampling is often not permitted for cultural heritage artefacts, the potential of two non-invasive techniques, Raman spectroscopy and Fiber Optic Reflectance Spectroscopy (FORS), has been investigated, and the results of the analysis of some original glasses of different colours (from yellow to orange and deep red) and periods (from the second decade of the 20th century to the present day) are reported in this study. In order to evaluate the applicability of non-invasive techniques to the investigation of the structure and distribution of nanoparticles dispersed in a glass matrix, scanning electron microscopy (SEM) with energy-dispersive spectroscopy (EDS) mapping, together with transmission electron microscopy (TEM) and electron diffraction tomography (EDT), have also been used. Raman spectroscopy allows a fast and non-destructive measurement of the quantum dot composition and size, through the evaluation of the frequencies and the broadening/asymmetry of the LO phonon bands, respectively, though the important role of the compressive strain arising from the glass matrix and the possible diffusion of zinc from the matrix into the nanocrystals should be taken into account when considering the optical-phonon frequencies. The incorporation of Zn has been inferred from an upward shift of the LO band related to the most abundant anion (S or Se), while the role of surface phonons, as well as of confinement-induced scattering by phonons with non-zero wavevectors, in the broadening of the Raman peaks has been verified. The optical band gap varies from 2.42 eV (pure CdS) to 1.70 eV (CdSe).
For the compositional range 0.2 ≤ x ≤ 0.5, the presence of two absorption edges has been related to the contribution of both pure CdS and the CdSₓSe₁₋ₓ solid solution; this particular feature is probably due to the presence of unaltered cubic zinc blende CdS structures that do not take part in the formation of the solid solution, which occurs only between hexagonal CdS and CdSe. Moreover, the band edge tailing originating from the disorder due to the formation of weak bonds, characterized by the Urbach edge energy, has been studied and, together with the FWHM of the Raman signal, has been taken as a good parameter for evaluating the degree of topological disorder. SEM-EDS mapping showed a peculiar distribution of the major constituents of the glass matrix (fluxes and stabilizers), especially in those samples for which a layered structure had been inferred from the spectroscopic study. Finally, TEM-EDS and EDT were used to obtain high-resolution information about the nanocrystals (NCs) and the heterogeneous glass layers. The presence of ZnO NCs (< 4 nm) dispersed in the matrix was verified for most of the samples, while, for those samples where disorder due to a more complex size and/or composition distribution of the NCs had been assumed, TEM clearly confirmed most of the assumptions made by the spectroscopic techniques.
Keywords: CdSₓSe₁₋ₓ, EDT, glass, spectroscopy, TEM-EDS
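As a first approximation, the alloy band gap can be interpolated linearly between the end-member values quoted above (2.42 eV for CdS, 1.70 eV for CdSe); real CdSₓSe₁₋ₓ alloys show a small bowing term that this sketch deliberately ignores:

```python
def band_gap_cdsxse1x(x: float) -> float:
    """Linear (Vegard-like) estimate of the CdS_x Se_(1-x) band gap in eV.

    x = 1 -> pure CdS (2.42 eV); x = 0 -> pure CdSe (1.70 eV).
    Ignores the bowing parameter of the real alloy, so this is only
    a rough guide to where the absorption edge should sit.
    """
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must lie in [0, 1]")
    e_cds, e_cdse = 2.42, 1.70
    return e_cdse + (e_cds - e_cdse) * x

print(round(band_gap_cdsxse1x(1.0), 2))  # 2.42 (pure CdS)
print(round(band_gap_cdsxse1x(0.0), 2))  # 1.7  (pure CdSe)
print(round(band_gap_cdsxse1x(0.5), 2))  # 2.06
```

For the intermediate compositions discussed in the abstract, comparing the FORS-derived absorption edge against this estimate is one way to flag the second, pure-CdS edge.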
Procedia PDF Downloads 298
123 Multifunctional Janus Microbots for Intracellular Delivery of Therapeutic Agents
Authors: Shilpee Jain, Sachin Latiyan, Kaushik Suneet
Abstract:
Unlike traditional robots, medical microbots are not only smaller in size, but they also possess various unique properties, for example, biocompatibility, stability in biological fluids, navigation against the bloodstream, and wireless control over locomotion. The idea behind their use in the medical field is to provide a minimally invasive method for addressing post-operative complications, including longer recovery time, infection, and pain. Herein, the present study demonstrates the fabrication of dual-nature magneto-conducting Fe₃O₄ magnetic nanoparticle (MNP) and SU-8-derived carbon-based Janus microbots for the efficient intracellular delivery of biomolecules. Microbots with a low aspect ratio and feature sizes of 2-5 μm were fabricated using a photolithography technique. These microbots were pyrolyzed at 900°C, converting SU-8 into amorphous carbon. The pyrolyzed microbots have dual properties: one half is magneto-conducting and the other half is only conducting, allowing therapeutic payloads to be delivered efficiently with the application of external electric/magnetic field stimulation. For efficient intracellular delivery of the microbots, size and aspect ratio play a significant role; however, at smaller scales, proper control over movement is difficult to achieve. The dual nature of the Janus microbots allows their maneuverability in complex fluids to be controlled using external electric as well as magnetic fields. Interestingly, the Janus microbots move faster under an external electric field (44 µm/s) than under a magnetic field (18 µm/s). Furthermore, these Janus microbots exhibit auto-fluorescence behavior that helps to track their pathway during navigation. Typically, the use of MNPs in microdevices increases the tendency to agglomerate; however, the incorporation of Fe₃O₄ MNPs in the pyrolyzed carbon reduces the chance of agglomeration of the microbots.
The biocompatibility of the medical microbots, which is an essential property of any biosystem, was determined in vitro using HeLa cells; the microbots were found to be compatible with HeLa cells. Additionally, the intracellular uptake of the microbots was higher in the presence of an external electric field than without electric field stimulation. In summary, cytocompatible Janus microbots were fabricated successfully. They are stable in biological fluids; their navigation is wirelessly controllable with external magnetic fields of a few gauss; their movement can be tracked thanks to their autofluorescence behavior; they are less susceptible to agglomeration; and higher cellular uptake can be achieved with the application of an external electric field. Thus, these carriers could offer a versatile platform to deliver therapeutic payloads under wireless actuation.
Keywords: amorphous carbon, electric/magnetic stimulation, Janus microbots, magnetic nanoparticles, minimally invasive procedures
Procedia PDF Downloads 123
122 Blockchain for the Monitoring and Reporting of Carbon Emission Trading: A Case Study on Its Possible Implementation in the Danish Energy Industry
Authors: Nkechi V. Osuji
Abstract:
The use of blockchain to address the issue of climate change is increasingly a topic of discourse among countries, industries, and stakeholders. For a long time, the European Union (EU) has been pursuing climate action in industry through sustainability programs. One such program is the monitoring, reporting and verification (MRV) framework of the EU ETS. However, the system has some key challenges and areas for improvement, which make it inefficient. The main objective of the research is to examine how blockchain can be used to address the inefficiency of the EU ETS program for the Danish energy industry, with a focus on its monitoring and reporting framework. Drawing on empirical data from 13 semi-structured expert interviews, three case studies, and literature reviews, three outcomes are presented in the study. The first concerns the current conditions and challenges of monitoring and reporting CO₂ emission trading. The second considers whether blockchain is the right fit to solve these challenges, and how. The third examines the factors that might affect the implementation of such a system and provides recommendations to mitigate these challenges. The first stage of the findings reveals that the monitoring and reporting of CO₂ emissions is a mandatory requirement by law for all energy operators under the EU ETS program. In reality, however, most energy operators are non-compliant with the program, which creates a gap and causes challenges in the monitoring and reporting of CO₂ emission trading. Other challenges identified by the study are the lack of transparency, the lack of standardization in CO₂ accounting, and the issue of double-counting in the current system. The second stage of the research was guided by three case studies and requirement engineering (RE) to explore these identified challenges and whether blockchain is the right fit to address them.
This stage of the research addressed the main research question: how can blockchain be used for monitoring and reporting CO₂ emission trading in the energy industry? Through analysis of the study data, the researcher developed a conceptual private permissioned Hyperledger blockchain and explained how it can address the identified challenges. In particular, the smart contract feature of blockchain was highlighted, because of its ability to automate and immutably, digitally enforce agreements without a middleman. These characteristics make it well suited to solving the identified issues of compliance, transparency, standardization, and double-counting. The third stage of the research presents technological constraints and the high level of stakeholder collaboration required as major factors that might affect the implementation of the proposed system. The proposed conceptual model requires close integration with other technologies such as the Internet of Things (IoT) and machine learning; the study therefore encourages future research in these areas, as blockchain is continually evolving its technological capabilities. As such, it remains a topic of interest in research and development for addressing climate change, and this study contributes to creating sustainable practices to address the global climate issue.
Keywords: blockchain, carbon emission trading, European Union emission trading system, monitoring and reporting
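The double-counting problem described above can be illustrated with a minimal sketch: an append-only ledger that tracks unique allowance IDs and rejects any attempt to surrender the same allowance twice. This is an illustrative toy only; all names are hypothetical, and an actual deployment would use Hyperledger Fabric chaincode rather than plain Python.

```python
# Illustrative sketch only: a minimal append-only ledger showing how unique
# allowance IDs can prevent double-counting of CO2 emission allowances.
# Class and method names are hypothetical, not from any real blockchain API.

class AllowanceLedger:
    def __init__(self):
        self._entries = []      # append-only list of surrender records
        self._retired = set()   # allowance IDs already surrendered

    def surrender(self, allowance_id, operator):
        """Record the surrender of an allowance; reject any reuse."""
        if allowance_id in self._retired:
            raise ValueError(f"double counting: {allowance_id} already surrendered")
        self._retired.add(allowance_id)
        self._entries.append({"id": allowance_id, "operator": operator})
        return len(self._entries) - 1   # entry index acts as a receipt

ledger = AllowanceLedger()
ledger.surrender("EUA-0001", "DK-Operator-A")
try:
    ledger.surrender("EUA-0001", "DK-Operator-B")  # second use is rejected
except ValueError as err:
    print(err)
```

In a real permissioned blockchain this check would run inside a smart contract, so no single party can bypass it; the toy class only conveys the invariant being enforced.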
Procedia PDF Downloads 125
121 Consumer Behavior and Attitudes of Green Advertising: A Collaborative Study with Three Companies to Educate Consumers
Authors: Mokhlisur Rahman
Abstract:
Consumers' understanding of products depends on the level of information an advertisement contains. Consumers' attitudes vary widely depending on factors such as their level of environmental awareness, their perception of the company's motives, and the perceived effectiveness of the advertising campaign. Given the growing eco-consciousness among consumers and their concern for the environment, green advertising strategies have become equally significant for companies seeking to attract new consumers. Because of the limitless product options in the market, it is important to understand consumers' purchasing habits, knowledge, and attitudes regarding eco-friendly products as a function of promotion. Additionally, encouraging consumers to buy sustainable products requires a platform that can convey to the world that being a stakeholder in sustainability is possible if consumers show eco-friendly behavior on a larger scale. Social media platforms provide an excellent atmosphere for companies to promote their sustainable efforts and connect engagingly with potential consumers. Unique green advertising strategies use techniques that carry both information and rewards for consumers. This study aims to understand consumer behavior and the effectiveness of green advertising through an experiment conducted in collaboration with three companies promoting their eco-friendly products using green designs on the products. The experiment uses three sustainable personalized offerings: Nike shoes, H&M t-shirts, and Patagonia school bags. The experiment uses a pretest and posttest design. 300 randomly selected participants take part in the experiment and survey through Facebook, Twitter, and Instagram. Nike, H&M, and Patagonia share the post of the experiment on their social media homepages with a video advertisement for the three products.
The consumers complete a pre-experiment online survey before making a purchase decision, to assess their attitudes and behavior toward eco-friendly products. While the consumer watches the product video, an audio track explains the product's information, such as the use of recycled materials, the manufacturing methods, sustainable packaging, and the impact on the environment. After making a purchase, consumers take a post-experiment survey to assess their perception of and behavior toward eco-friendly products. For the data analysis, descriptive statistics (mean, standard deviation, and frequencies) summarize the pre- and post-experiment survey data. A paired-sample t-test measures the difference in consumers' behavior and attitudes between the pre-purchase and post-experiment survey results. This experiment provides consumers ample time to weigh many aspects rather than acting on impulse. This research provides valuable insights into how companies can adopt sustainable and eco-friendly products. The results set a target for the companies to achieve a sustainable production goal that ultimately supports companies' profit-making and promotes consumers' well-being. This empowers consumers to make informed choices about the products they purchase and to support the companies they care about.
Keywords: green-advertising, sustainability, consumer-behavior, social media
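The paired-sample t-test used in the analysis above can be sketched as follows. The attitude scores here are made-up placeholder values, not data from the study; the statistic is computed by hand to show exactly what the test measures (mean of the pre/post differences divided by its standard error).

```python
# A minimal sketch of a paired-sample t-test: each participant contributes one
# pre-purchase and one post-experiment score, and the test works on the
# per-participant differences. Scores below are hypothetical placeholders.
import math

def paired_t(pre, post):
    """Return the t statistic for paired samples (post - pre)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)                     # mean / standard error

pre  = [3.1, 2.8, 3.5, 3.0, 2.9]   # hypothetical pre-purchase attitude scores
post = [3.9, 3.4, 3.8, 3.6, 3.5]   # hypothetical post-experiment scores
print(round(paired_t(pre, post), 2))
```

In practice the t statistic would be compared against the t distribution with n-1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`) to obtain a p-value.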
Procedia PDF Downloads 85
120 Structure and Properties of Intermetallic NiAl-Based Coatings Produced by Magnetron Sputtering Technique
Authors: Tatiana S. Ogneva
Abstract:
Aluminum- and nickel-based intermetallic compounds have attracted the attention of the scientific community as promising materials for heat-resistant and wear-resistant coatings in manufacturing areas such as microelectronics, aircraft and rocket building, and the chemical industry. Magnetron sputtering makes it possible to coat materials without the formation of a liquid phase and improves the mechanical and functional properties of nickel aluminides thanks to the possibility of nanoscale structure formation. The purpose of the study is to investigate the structure and properties of intermetallic coatings produced by the magnetron sputtering technique. A distinctive feature of this work is the use of composite sputtering targets consisting of two semicircular sectors of cp-Ni and cp-Al. Plates of alumina, silicon, titanium, and steel alloys were used as substrates. To assess the effect of sputtering conditions on the structure of the intermetallic coatings, a series of samples was produced and studied in detail using scanning and transmission electron microscopy and X-ray diffraction. In addition, nanohardness and scratch tests were carried out. The varied parameters were the distance from the substrate to the target and the duration and power of the sputtering. The thickness of the obtained intermetallic coatings varied from 0.05 to 0.5 mm depending on the sputtering conditions. The X-ray diffraction data indicated that the formation of intermetallic compounds occurred after sputtering without additional heat treatment. Sputtering at a distance not closer than 120 mm led to the formation of the NiAl phase. Increasing the magnetron power from 300 to 900 W increased the heterogeneity of the phase composition, with the intermetallic phases NiAl, Ni₂Al₃, NiAl₃, and Al appearing under the aluminum side, and NiAl, Ni₃Al, and Ni under the nickel side of the target. A similar trend is observed when the sputtering distance is decreased from 100 to 60 mm.
The change in the phase composition correlates with the change in the atomic composition of the coatings. Scanning electron microscopy revealed that the coatings have a nanoscale grain structure. The substrate material and the distance from the substrate to the magnetron have a significant effect on the structure formation process. The size of the nanograins ranges from 10 to 83 nm and depends not only on the sputtering modes but also on the substrate material. The nanostructure of the material influences its mechanical properties. The highest nanohardness, 12 GPa, was reached for coatings deposited for 30 minutes on metallic substrates at a distance of 100 mm. It was shown that nanohardness depends on the grain size of the intermetallic compound. Scratch tests showed a high level of adhesion of the coating to the substrate, without any delamination or cracking. The results of the study showed that magnetron sputtering of composite targets consisting of nickel and aluminum semicircles makes it possible to form intermetallic coatings with good mechanical properties directly during sputtering, without additional heat treatment.
Keywords: intermetallic coatings, magnetron sputtering, mechanical properties, structure
Procedia PDF Downloads 118
119 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking
Authors: Noga Bregman
Abstract:
Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. 
The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for the specifics of seismic data. The EQMamba method holds potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves
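The abstract describes training with "a weighted combination of binary cross-entropy loss functions for each task" (detection, P picking, S picking). A minimal sketch of that loss structure is given below; the task weights and probabilities are illustrative placeholders, not the values used to train EQMamba.

```python
# Sketch of a weighted multi-task BCE loss for (detection, P pick, S pick).
# Weights and example probabilities are hypothetical, chosen only to
# illustrate the structure of the combined objective.
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for one predicted probability p and label y."""
    p = min(max(p, eps), 1 - eps)   # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def multitask_loss(preds, labels, weights=(0.05, 0.40, 0.55)):
    """Weighted sum of per-task BCE losses (detection, P pick, S pick)."""
    return sum(w * bce(p, y) for w, p, y in zip(weights, preds, labels))

# e.g. confident detection, good P pick, uncertain S pick, all labels positive
loss = multitask_loss(preds=(0.95, 0.90, 0.60), labels=(1, 1, 1))
print(round(loss, 4))
```

In a real training loop each "prediction" is a per-time-step probability trace rather than a scalar, and the BCE is averaged over the sequence, but the weighting across the three decoder heads works the same way.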
Procedia PDF Downloads 49
118 Single Centre Retrospective Analysis of MR Imaging in Placenta Accreta Spectrum Disorder with Histopathological Correlation
Authors: Frank Dorrian, Aniket Adhikari
Abstract:
The placenta accreta spectrum (PAS), which includes placenta accreta, increta, and percreta, is characterized by the abnormal implantation of placental chorionic villi beyond the decidua basalis. Key risk factors include placenta previa, prior cesarean sections, advanced maternal age, uterine surgeries, multiparity, pelvic radiation, and in vitro fertilization (IVF). The incidence of PAS has increased tenfold over the past 50 years, largely due to rising cesarean rates. PAS is associated with significant peripartum and postpartum hemorrhage. Magnetic resonance imaging (MRI) and ultrasound assist in the evaluation of PAS, enabling a multidisciplinary approach to mitigate morbidity and mortality. This study retrospectively analyzed PAS cases at Royal Prince Alfred Hospital, Sydney, Australia. Using the SAR-ESUR joint consensus statement, seven imaging signs were reassessed for their sensitivity and specificity in predicting PAS, with histopathological correlation. The standardized MRI protocols for PAS at the institution were also reviewed. Data were collected from the picture archiving and communication system (PACS) records from 2010 to July 2024, focusing on cases where MR imaging and confirmed histopathology or operative notes were available. This single-center, observational study provides insights into the reliability of MRI for PAS detection and the optimization of imaging protocols for accurate diagnosis. The findings demonstrate that intraplacental dark bands serve as highly sensitive markers for diagnosing PAS, achieving sensitivities of 88.9%, 85.7%, and 100% for placenta accreta, increta, and percreta, respectively, with a combined specificity of 42.9%. Sensitivity for abnormal vascularization was lower (33.3%, 28.6%, and 50%), with a specificity of 57.1%. The placenta bulge exhibited sensitivities of 55.5%, 57.1%, and 100%, with a specificity of 57.1%. 
Loss of the T2 hypointense interface had sensitivities of 66.6%, 85.7%, and 100%, with 42.9% specificity. Myometrial thinning showed high sensitivity across PAS conditions (88.9%, 71.4%, and 100%) and a specificity of 57.1%. Bladder wall thinning was sensitive only for placenta percreta (50%) but had a specificity of 100%. Focal exophytic mass displayed variable sensitivity (22.9%, 42.9%, and 100%) with a specificity of 85.7%. These results highlight the diagnostic variability among markers, with intraplacental dark bands and myometrial thinning being useful in detecting abnormal placentation, though they lack high specificity. The literature and the results of our study highlight that while no single feature can definitively diagnose PAS, the presence of multiple features, especially when combined with elevated clinical risk, significantly increases the likelihood of underlying PAS. A thorough understanding of the range of MRI findings associated with PAS, along with awareness of the clinical significance of each sign, helps the radiologist diagnose the condition more accurately and assist in surgical planning, ultimately improving patient care.
Keywords: placenta, accreta, spectrum, MRI
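The per-sign figures above follow from the standard definitions of sensitivity and specificity against the histopathological gold standard. A small sketch of the calculation is shown below; the counts are inferred examples consistent with the quoted percentages (e.g. 8 of 9 = 88.9%), not the study's raw case numbers.

```python
# Sensitivity/specificity of an imaging sign versus histopathology.
# The case counts below are hypothetical, back-calculated to match the
# percentages quoted in the text, not actual study data.

def sensitivity(tp, fn):
    """True-positive rate: share of confirmed PAS cases showing the sign."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: share of non-PAS cases without the sign."""
    return tn / (tn + fp)

# Intraplacental dark bands for placenta accreta: e.g. 8 of 9 cases positive
print(f"{sensitivity(8, 1):.1%}")   # matches the reported 88.9%
# A combined specificity of 42.9% corresponds to, e.g., 3 of 7 controls negative
print(f"{specificity(3, 4):.1%}")   # matches the reported 42.9%
```

The small denominators implied by these percentages also explain the coarse granularity of the reported values (steps of roughly 1/7 and 1/9).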
Procedia PDF Downloads 4
117 Leuco Dye-Based Thermochromic Systems for Application in Temperature Sensing
Authors: Magdalena Wilk-Kozubek, Magdalena Rowińska, Krzysztof Rola, Joanna Cybińska
Abstract:
Leuco dye-based thermochromic systems are classified as intelligent materials because they exhibit thermally induced color changes. Thanks to this feature, they are mainly used as temperature sensors in many industrial sectors. For example, placing a thermochromic material on a chemical reactor may warn that the maximum permitted temperature for a chemical process has been exceeded. Usually, two components, a color former and a developer, are needed to produce a system with an irreversible color change. The color former is an electron-donating (proton-accepting) compound such as a fluoran leuco dye. The developer is an electron-accepting (proton-donating) compound such as an organic carboxylic acid. When the developer melts, the color former-developer complex is created and the thermochromic system becomes colored. Typically, the melting point of the developer determines the temperature at which the color change occurs. When the lactone ring of the color former is closed, the dye is in its colorless state. The ring opening, induced by the addition of a proton, causes the dye to switch to its colored state. Since the color former and the developer are often solids, they can be incorporated into polymer films to facilitate their practical use in industry. The objective of this research was to fabricate a leuco dye-based thermochromic system that irreversibly changes color after reaching a temperature of 100°C. For this purpose, a benzofluoran leuco dye (as color former) and phenoxyacetic acid (as developer, with a melting point of 100°C) were introduced into polymer films by a drop-casting process. The film preparation process was optimized in order to obtain thin films with appropriate properties such as transparency, flexibility, and homogeneity.
Among the optimized factors were the concentrations of the benzofluoran leuco dye and phenoxyacetic acid; the type, average molecular weight, and concentration of the polymer; and the type and concentration of the surfactant. The selected films, containing the benzofluoran leuco dye and phenoxyacetic acid, were combined by mild heat treatment. Structural characterization of the single and combined films was carried out by FTIR spectroscopy, morphological analysis was performed by optical microscopy and SEM, phase transitions were examined by DSC, color changes were investigated by digital photography and UV-Vis spectroscopy, and emission changes were studied by photoluminescence spectroscopy. The resulting thermochromic system is colorless at room temperature, but after reaching 100°C the developer melts and the system turns irreversibly pink. It could therefore be used as an additional sensor to warn against the boiling of cooling water in power plants. Currently used electronic temperature indicators are prone to faults and unwanted third-party actions. The sensor constructed in this work is transparent, so it can go unnoticed by an outsider and constitute a reliable reference for the person responsible for the apparatus.
Keywords: color developer, leuco dye, thin film, thermochromism
Procedia PDF Downloads 98
116 Content Monetization as a Mark of Media Economy Quality
Authors: Bela Lebedeva
Abstract:
The characteristics of the Web as a channel of information dissemination (accessibility and openness, interactivity and multimedia news) grow broader and reach the audience quickly, positively affecting the perception of content but blurring the understanding of journalistic work. As a result, the audience and advertisers continue migrating to the Internet. Moreover, online targeting allows monetizing not only the audience (as is customary for traditional media) but also the content and traffic, and more accurately. As users identify themselves with the qualitative characteristics of the new market, its actors take shape. A conflict of interests lies at the base of the economy of their relations; the problem of a traffic tax is one example. Meanwhile, content monetization also activates the fiscal interest of the state. The balance of supply and demand is often violated due to political risks, particularly where state capitalism, populism, and authoritarian methods govern such social institutions as the media. A unique example of access to journalistic material limited by content monetization is the television channel Dozhd' (Rain) in the Russian web space. Its liberal-minded audience has a better possibility for discussion. However, the channel could have been much more successful under conditions of unlimited free speech. Avoiding state pressure and censorship, its management decided to save at least its online performance by monetizing all of the content for the core audience. The study methodology was primarily based on the analysis of journalistic content and on qualitative and quantitative analysis of the audience. Reconstructing the main events and the relationships of actors in the market over the last six years, the researcher reached several conclusions. First, under conditions of content monetization, the capitalization of content quality will always strive toward the qualitative characteristics of the user, thereby identifying him.
Vice versa, the user's demand generates high-quality journalism. The second conclusion follows from the first. The growth of technology, information noise, new political challenges, economic volatility, and the change of cultural paradigm: all these factors shape the content-payment model for the individual user. This model defines the user as a beneficiary of specific knowledge and indicates a constant balance of supply and demand, other conditions being equal. As a result, a new economic quality of information is created. This feature is an indicator of the market as a self-regulating system. Monetized quality information is less popular than that of public broadcasting services, but its audience is able to make decisions. These very users sustain the niche sectors that have greater potential for technological development, including new ways of monetizing content. The third point of the study allows it to be developed in the discourse of media-space liberalization. This cultural phenomenon may open opportunities for developing the architecture of social and economic relations both locally and regionally.
Keywords: content monetization, state capitalism, media liberalization, media economy, information quality
Procedia PDF Downloads 246
115 Poultry Manure and Its Derived Biochar as a Soil Amendment for Newly Reclaimed Sandy Soils under Arid and Semi-Arid Conditions
Authors: W. S. Mohamed, A. A. Hammam
Abstract:
Sandy soils under arid and semi-arid conditions are characterized by poor physical and biochemical properties, such as low water retention, rapid organic matter decomposition, low nutrient use efficiency, and limited crop productivity. The addition of organic amendments is crucial to improve soil properties and, consequently, enhance nutrient use efficiency and slow organic carbon decomposition. Two-year field experiments were conducted to investigate the feasibility of using poultry manure and its derived biochar, integrated with different levels of N fertilizer, as a soil amendment for newly reclaimed sandy soils in the Western Desert of El-Minia Governorate, Egypt. Results of this research revealed that the addition of poultry manure and its derived biochar induced pronounced effects on soil moisture content at the saturation point and field capacity (FC) and, consequently, on available water. Data showed that the application of poultry manure (PM) or PM-derived biochar (PMB) in combination with inorganic N levels caused significant changes in a range of the investigated sandy soil's biochemical properties, including pH, EC, mineral N, dissolved organic carbon (DOC), dissolved organic N (DON), and the DOC/DON quotient. Overall, the impact of PMB on soil physical properties was found to be superior to that of PM, regardless of the inorganic N level. In addition, the obtained results showed that PM and PMB application had the capacity to stimulate vigorous growth, nutritional status, and production levels of wheat and sorghum, and to increase soil organic matter content and N uptake and recovery compared to the control. By contrast, comparing PM and PMB at different levels of inorganic N, the obtained results showed higher relative increases in both grain and straw yields of wheat in plots treated with PM than in those treated with PMB.
An interesting feature of this research is that the biochar derived from PM increased the soil organic carbon (SOC) of the treated sandy soil 1.75 times more than the soil treated with PM itself by the end of the cropping seasons, despite PM being applied at double the amount. This was attributed to the higher carbon stability of biochar-treated sandy soils, which increases the soil's resistance to carbon decomposition, in comparison with the labile carbon of PM. It could be concluded that organic manures applied to sandy soils under arid and semi-arid conditions are subject to high decomposition and mineralization rates over crop seasons. Biochar derived from organic wastes is considered a source of stable carbon and could be a very promising choice for substituting easily decomposable organic manures under arid conditions. Therefore, sustainable agriculture and productivity in newly reclaimed sandy soils call for a single high-rate addition of biochar derived from organic manures instead of frequent additions of such organic amendments.
Keywords: biochar, dissolved organic carbon, N-uptake, poultry, sandy soil
Procedia PDF Downloads 143
114 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator
Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić
Abstract:
Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are performed on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software that enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were performed with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated through the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; and the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%.
Spatial and contrast resolution tests must comply with the results obtained at commissioning; otherwise the machine requires service. The result of the image noise test must fall within 20% of the baseline value. Slice thickness must meet manufacturer specifications, and patient table stability, with longitudinal transfer of the loaded table, must not show a vertical deviation of more than 2 mm. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for radiation treatment planning influences the delineation of the tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to the patient.
Keywords: CT simulator, radiotherapy, quality control, QA programme
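The tolerance limits quoted above lend themselves to automated pass/fail checks. Below is a hedged sketch of how such checks could be encoded; the baseline and measured values are hypothetical, and in practice the baselines come from the machine's own commissioning data.

```python
# Sketch of automated QA tolerance checks for a radiotherapy CT simulator,
# mirroring the institutional limits described in the text. All numeric
# inputs in the example calls are hypothetical.

def ct_number_ok(measured_hu, baseline_hu, tol_hu=5):
    """CT number accuracy: within +/- 5 HU of the commissioning value."""
    return abs(measured_hu - baseline_hu) <= tol_hu

def uniformity_ok(roi_values_hu, tol_hu=10):
    """Field uniformity: peripheral ROI deviations from the centre ROI
    (listed first) must stay within +/- 10 HU."""
    centre = roi_values_hu[0]
    return all(abs(v - centre) <= tol_hu for v in roi_values_hu[1:])

def noise_ok(measured_sd, baseline_sd, tol_frac=0.20):
    """Image noise: within 20% of the baseline standard deviation."""
    return abs(measured_sd - baseline_sd) <= tol_frac * baseline_sd

print(ct_number_ok(2, 0))            # water ROI at 2 HU vs 0 HU baseline
print(uniformity_ok([0, 4, -6, 8]))  # centre ROI first, then peripheral ROIs
print(noise_ok(11.0, 10.0))          # noise 10% above baseline
```

Wrapping daily checks in simple functions like these makes the pass/fail criteria explicit and the QA record reproducible, which is the intent of the institutional programme described.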
Procedia PDF Downloads 529