Search results for: computer aided detection
835 Computer Simulation to Investigate Magnetic and Wave-Absorbing Properties of Iron Nanoparticles
Authors: Chuan-Wen Liu, Min-Hsien Liu, Chung-Chieh Tai, Bing-Cheng Kuo, Cheng-Lung Chen, Huazhen Shen
Abstract:
A recent surge in research on magnetic radar absorbing materials (RAMs) has presented researchers with new opportunities and challenges. This study was performed to gain a better understanding of the wave-absorbing phenomenon of magnetic RAMs. First, we hypothesized that the absorbing phenomenon is dependent on the particle shape. Using the Materials Studio program and the micro-dot magnetic dipoles (MDMD) method, we obtained results from magnetic RAMs that support this hypothesis. The total MDMD energy of disk-like iron particles was greater than that of spherical iron particles. In addition, both experiments and computational data showed that particulate aggregation decreases the wave-absorbance. Combining molecular dynamics simulation results with the theory of magnetization of magnetic dots, we investigated the magnetic properties of iron materials with different particle shapes and degrees of aggregation under external magnetic fields. The MDMD energies of the materials under magnetic fields of various strengths were simulated. Our results suggest that disk-like iron particles exhibit better magnetization than spherical iron particles, which correlates with the wave-absorbing property of the iron material. This study may therefore be of importance in explaining the wave-absorbing characteristics of magnetic RAMs.
Keywords: wave-absorbing property, magnetic material, micro-dot magnetic dipole, particulate aggregation
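The abstract compares total MDMD energies of particle configurations but gives no formulas. As a rough, hypothetical sketch of the kind of pairwise sum such a comparison rests on, the classical point-dipole interaction energy can be accumulated over all particle pairs (the actual MDMD method and Materials Studio models are not reproduced here):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def dipole_pair_energy(m1, m2, r_vec):
    """Point-dipole interaction energy between moments m1, m2 (A*m^2)
    separated by r_vec (m):
        E = mu0 / (4*pi*r^3) * [m1.m2 - 3 (m1.rhat)(m2.rhat)]."""
    r = math.sqrt(sum(c * c for c in r_vec))
    rhat = [c / r for c in r_vec]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return MU0 / (4 * math.pi * r ** 3) * (
        dot(m1, m2) - 3 * dot(m1, rhat) * dot(m2, rhat))

def total_energy(moments, positions):
    """Sum the pairwise dipole energies over all distinct particle pairs."""
    energy = 0.0
    n = len(moments)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = [positions[j][k] - positions[i][k] for k in range(3)]
            energy += dipole_pair_energy(moments[i], moments[j], r_vec)
    return energy
```

Two co-aligned dipoles head-to-tail along their common axis attract (negative energy), while the same pair placed side by side repels, which is the geometric effect that makes aggregate shape matter.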
Procedia PDF Downloads 489
834 Humans Trust Building in Robots with the Help of Explanations
Authors: Misbah Javaid, Vladimir Estivill-Castro, Rene Hexel
Abstract:
The field of robotics is advancing rapidly to the point where robots have become an integral part of modern society. These robots collaborate productively with humans, compensating for some shortcomings in human abilities and complementing them with their skills. Effective teamwork between humans and robots demands investigation of the critical issue of trust. The field of human-computer interaction (HCI) has already examined the trust humans place in technical systems, mostly with regard to issues such as reliability and accuracy of performance. Early work in the area of expert systems suggested that automatic generation of explanations improved the trust and acceptability of these systems. In this work, we augmented a robot with a user-invoked explanation-generation capability. To measure the effect of explanations on humans' level of trust, we collected subjective survey measures and behavioral data in a human-robot team task set in an interactive, adversarial, partial-information environment. The results showed that, with the explanation capability, humans not only understood and recognized the robot as an expert team partner, but human learning and human-robot team performance also improved significantly because of the meaningful interaction with the robot. We expect these outcomes to provide insights into the further improvement of trustworthy human-robot relationships.
Keywords: explanation interface, adversaries, partial observability, trust building
Procedia PDF Downloads 198
833 Exergy Based Analysis of Parabolic Trough Collector Using Twisted-Tape Inserts
Authors: Atwari Rawani, Suresh Prasad Sharma, K. D. P. Singh
Abstract:
In this paper, an analytical investigation based on energy and exergy analysis of a parabolic trough collector (PTC) with alternate clockwise and counter-clockwise twisted-tape inserts in the absorber tube is presented. For fully developed flow under quasi-steady-state conditions, energy equations have been developed to analyze the rise in fluid temperature, thermal efficiency, entropy generation, and exergy efficiency. The effect of system and operating parameters on performance has also been studied. A computer program based on the mathematical models was developed in C++ to estimate the temperature rise of the fluid and evaluate performance under specified conditions. For the numerical simulations, four different twist ratios, x = 2, 3, 4, 5, and mass flow rates from 0.06 kg/s to 0.16 kg/s, covering the Reynolds number range of 3000-9000, are considered. This study shows that twisted-tape inserts hold great promise for enhancing the performance of the PTC. Results show that for x = 1, the Nusselt number/heat transfer coefficient is found to be 3.528 and 3.008 times that of the plain absorber of the PTC at mass flow rates of 0.06 kg/s and 0.16 kg/s, respectively, while the corresponding enhancements in thermal efficiency are 12.57% and 5.065%, respectively. The exergy efficiency is found to be 10.61% and 10.97%, and the enhancement factor is 1.135 and 1.048, for the same set of conditions.
Keywords: exergy efficiency, twisted tape ratio, turbulent flow, useful heat gain
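The quoted Reynolds number range can be checked against the mass flow rates with the standard pipe-flow relation Re = 4ṁ/(πDμ); the absorber diameter and fluid viscosity used below are illustrative assumptions, not values from the paper:

```python
import math

def reynolds_number(m_dot, d_inner, mu):
    """Pipe-flow Reynolds number from mass flow rate:
    Re = 4 * m_dot / (pi * D * mu)."""
    return 4.0 * m_dot / (math.pi * d_inner * mu)
```

With an assumed 25 mm absorber bore and a water-like viscosity of 1e-3 Pa·s, ṁ = 0.06-0.16 kg/s spans roughly Re ≈ 3100-8100, consistent with the stated 3000-9000 range.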
Procedia PDF Downloads 171
832 Development of a General Purpose Computer Programme Based on Differential Evolution Algorithm: An Application towards Predicting Elastic Properties of Pavement
Authors: Sai Sankalp Vemavarapu
Abstract:
This paper discusses the application of machine learning in the field of transportation engineering for predicting engineering properties of pavement more accurately and efficiently. Predicting the elastic properties aids us in assessing current road conditions and taking appropriate measures to avoid inconvenience to commuters. This improves the longevity and sustainability of the pavement layer while reducing its overall life-cycle cost. As an example, we have implemented differential evolution (DE) in the back-calculation of the elastic modulus of multi-layered pavement. The proposed DE global optimization back-calculation approach is integrated with a forward response model. This approach treats back-calculation as a global optimization problem, where the cost function to be minimized is defined as the root mean square error between measured and computed deflections. The optimal solution, in this case the elastic modulus, is searched for in the solution space by the DE algorithm. The best DE parameter combinations and the most optimal value are identified so that the results are reproducible whenever the need arises. The algorithm's performance in varied scenarios was analyzed by changing the input parameters. The prediction was well within the permissible error, demonstrating the effectiveness of DE.
Keywords: cost function, differential evolution, falling weight deflectometer, genetic algorithm, global optimization, metaheuristic algorithm, multilayered pavement, pavement condition assessment, pavement layer moduli back calculation
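As a hedged illustration of the back-calculation loop described above, a minimal DE/rand/1/bin optimizer can minimize the RMSE between "measured" and computed deflections. The forward model used here (deflection inversely proportional to modulus) is a deliberately simplified stand-in for a real layered-elastic response model, and all parameter values are assumptions for the sketch:

```python
import random

def rmse(a, b):
    """Root mean square error between two deflection basins."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Minimal DE/rand/1/bin over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [cost(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clamp mutant to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = cost(trial)
            if tc <= costs[i]:  # greedy selection
                pop[i], costs[i] = trial, tc
    best = min(range(pop_size), key=lambda k: costs[k])
    return pop[best], costs[best]

# Toy forward model: deflection at sensor k falls off as 1/(E * k).
def forward(E):
    return [1000.0 / (E * k) for k in (1, 2, 3, 4)]
```

Recovering a synthetic "true" modulus (here 5000, arbitrary units) from its own deflection basin shows the global-optimization framing: the only coupling between DE and the pavement problem is the RMSE cost function.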
Procedia PDF Downloads 163
831 A Framework Factors Influencing Accounting Information Systems Adoption Success
Authors: Manirath Wongsim
Abstract:
AIS plays an important role in business management and strategy and can provide assistance in all phases of decision making. Many organisations therefore need to adopt AIS well, as it is critical for a company to organise, manage, and operate processes in all sections. In order to implement AIS successfully, it is important to understand the underlying factors that influence AIS adoption. Therefore, this research studies the factors that influence and impact the success of AIS adoption. A model has been designed to illustrate the factors that influence AIS adoption. It also attempts to identify the critical success factors that organisations should focus on to ensure successful adoption of the accounting process. This framework was developed from case studies by collecting qualitative and quantitative data. Case study and survey methodologies were adopted for this research. Case studies in two Thai organisations were carried out. The results of the two main case studies suggested nine factors that may have an impact on AIS adoption. A survey instrument was developed based on the findings from the case studies. Two large-scale surveys were sent to selected members of the Thailand Accountant and the Thailand Computer Society to further develop and test the research framework. The top three critical factors for ensuring AIS adoption were: top management commitment, steering committees, and technical capability of AIS personnel. That is, it is now clear which factors impact AIS adoption, and which of those factors are critical for ensuring AIS adoption success.
Keywords: accounting information system, accounting information systems adoption, factors influencing AIS adoption
Procedia PDF Downloads 397
830 APP-Based Language Teaching Using Mobile Response System in the Classroom
Authors: Martha Wilson
Abstract:
With the peak of Computer-Assisted Language Learning slowly coming to pass, and Mobile-Assisted Language Learning at times a bit lacking in the communicative department, we are now faced with a challenging question: How can we engage the interest of our digital native students and, most importantly, sustain it? Our classrooms are now experiencing an influx of "digital natives", people who have grown up using and having unlimited access to technology. While modernizing our curriculum and digitalizing our classrooms are necessary in order to accommodate this new learning style, they represent a huge financial burden and a massive undertaking for language institutes. Instead, opting for a more compact, simple, yet multidimensional pedagogical tool may be the solution to the issue at hand. This paper aims to give a brief overview of an existing device referred to as the Student Response System (SRS) and to expand on this notion to include a new prototype of response system designed as a mobile application to eliminate the need for costly hardware and software. Additionally, an analysis of recent attempts by other institutes to develop a Mobile Response System (MRS) and customer reviews of existing MRSs will be provided, as well as the lessons learned from those projects. Finally, while the new model of MRS is still in its infancy, this paper will discuss the implications of incorporating such an application as a tool to support and enrich traditional techniques, and will also offer practical classroom applications with the existing response systems that are immediately available on the market.
Keywords: app, clickers, mobile app, mobile response system, student response system
Procedia PDF Downloads 370
829 Effect of Nanoscale Bismuth Oxide on Radiation Shielding and Interaction Characteristics of Polyvinyl Alcohol-Based Polymer for Medical Apron Design
Authors: E. O. Echeweozo
Abstract:
This study evaluated the radiation shielding and interaction characteristics of polyvinyl alcohol (PVA) polymer separately doped with 10% and 20% nanoscale Bi₂O₃ for medical apron design and the shielding of special electronic installations. Prepared samples were characterized by scanning electron microscopy (SEM) and energy-dispersive spectrometry (EDS). The EDS results showed that carbon (C), oxygen (O), and bismuth (Bi) were the predominant elements present in the prepared samples. The SEM results displayed surface irregularities due to a special bonding matrix between PVA and Bi₂O₃. The mass attenuation coefficient (MAC), effective atomic number (Zeff), half value layer (HVL), mean free path (MFP), fast neutron removal cross-section, total mass stopping power (TSP), and photon range of the prepared polymer composites (PV-1Bi and PV-2Bi) were evaluated with the XCOM and PHITS computer programs. Results showed that the MAC of the prepared polymer samples was significantly higher than that of some recently developed composites at 0.662 MeV and 1.25 MeV gamma energies. Therefore, polyvinyl alcohol (PVA) polymer doped with Bi₂O₃ should be deployed in medical apron design and in shielding special electronic installations where flexibility and high adhesion ability are crucial.
Keywords: polyvinyl alcohol (PVA), polymer composite, gamma-rays, charged particles
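The HVL and MFP mentioned above follow directly from the linear attenuation coefficient μ = MAC × ρ via HVL = ln 2/μ and MFP = 1/μ. A minimal sketch; the MAC and density values in the usage check are illustrative, not the paper's measured data:

```python
import math

def linear_attenuation(mac_cm2_per_g, density_g_per_cm3):
    """Linear attenuation coefficient mu = MAC * rho, in 1/cm."""
    return mac_cm2_per_g * density_g_per_cm3

def half_value_layer(mu):
    """Thickness halving the photon intensity: HVL = ln(2) / mu."""
    return math.log(2) / mu

def mean_free_path(mu):
    """Average distance between photon interactions: MFP = 1 / mu."""
    return 1.0 / mu
```

For example, an assumed MAC of 0.08 cm²/g at a composite density of 1.3 g/cm³ gives μ = 0.104 cm⁻¹, so roughly 6.7 cm halves the beam and a photon travels about 9.6 cm between interactions.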
Procedia PDF Downloads 19
828 Single Centre Retrospective Analysis of MR Imaging in Placenta Accreta Spectrum Disorder with Histopathological Correlation
Authors: Frank Dorrian, Aniket Adhikari
Abstract:
The placenta accreta spectrum (PAS), which includes placenta accreta, increta, and percreta, is characterized by the abnormal implantation of placental chorionic villi beyond the decidua basalis. Key risk factors include placenta previa, prior cesarean sections, advanced maternal age, uterine surgeries, multiparity, pelvic radiation, and in vitro fertilization (IVF). The incidence of PAS has increased tenfold over the past 50 years, largely due to rising cesarean rates. PAS is associated with significant peripartum and postpartum hemorrhage. Magnetic resonance imaging (MRI) and ultrasound assist in the evaluation of PAS, enabling a multidisciplinary approach to mitigate morbidity and mortality. This study retrospectively analyzed PAS cases at Royal Prince Alfred Hospital, Sydney, Australia. Using the SAR-ESUR joint consensus statement, seven imaging signs were reassessed for their sensitivity and specificity in predicting PAS, with histopathological correlation. The standardized MRI protocols for PAS at the institution were also reviewed. Data were collected from the picture archiving and communication system (PACS) records from 2010 to July 2024, focusing on cases where MR imaging and confirmed histopathology or operative notes were available. This single-center, observational study provides insights into the reliability of MRI for PAS detection and the optimization of imaging protocols for accurate diagnosis. The findings demonstrate that intraplacental dark bands serve as highly sensitive markers for diagnosing PAS, achieving sensitivities of 88.9%, 85.7%, and 100% for placenta accreta, increta, and percreta, respectively, with a combined specificity of 42.9%. Sensitivity for abnormal vascularization was lower (33.3%, 28.6%, and 50%), with a specificity of 57.1%. The placenta bulge exhibited sensitivities of 55.5%, 57.1%, and 100%, with a specificity of 57.1%. 
Loss of the T2 hypointense interface had sensitivities of 66.6%, 85.7%, and 100%, with 42.9% specificity. Myometrial thinning showed high sensitivity across PAS conditions (88.9%, 71.4%, and 100%) and a specificity of 57.1%. Bladder wall thinning was sensitive only for placenta percreta (50%) but had a specificity of 100%. Focal exophytic mass displayed variable sensitivity (22.9%, 42.9%, and 100%) with a specificity of 85.7%. These results highlight the diagnostic variability among markers, with intraplacental dark bands and myometrial thinning being useful in detecting abnormal placentation, though they lack high specificity. The literature and the results of our study highlight that, while no single feature can definitively diagnose PAS, the presence of multiple features, especially when combined with elevated clinical risk, significantly increases the likelihood of underlying PAS. A thorough understanding of the range of MRI findings associated with PAS, along with awareness of the clinical significance of each sign, helps the radiologist more accurately diagnose the condition and assist in surgical planning, ultimately improving patient care.
Keywords: placenta, accreta, spectrum, MRI
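The sensitivities and specificities above reduce to simple ratios of confusion-matrix counts. A minimal sketch; the counts in the usage check are hypothetical, merely chosen to reproduce the reported 88.9% sensitivity (8 of 9 accreta cases) and 42.9% specificity (3 of 7 controls):

```python
def sensitivity(true_pos, false_neg):
    """Fraction of diseased cases correctly flagged: TP / (TP + FN)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of healthy cases correctly cleared: TN / (TN + FP)."""
    return true_neg / (true_neg + false_pos)
```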
Procedia PDF Downloads 4
827 Numerical Investigation of Nanofluid Based Thermosyphon System
Authors: Kiran Kumar K., Ramesh Babu Bejjam, Atul Najan
Abstract:
A thermosyphon system is a heat transfer loop that operates on the basis of gravity and buoyancy forces. It guarantees good reliability and low maintenance cost, as it does not involve any mechanical pump. Therefore, it can be used in many industrial applications such as refrigeration and air conditioning, electronic cooling, nuclear reactors, geothermal heat extraction, etc. However, flow instabilities and loop configuration are the major problems in this system. Several previous researchers have shown that instabilities can be suppressed by using nanofluids as the loop fluid. In the present study, a rectangular thermosyphon loop with end heat exchangers is considered. This configuration is appropriate for many practical applications such as solar water heaters, geothermal heat extraction, etc. In the present work, a steady-state analysis is carried out on a thermosyphon loop with parallel-flow coaxial heat exchangers at the heat source and heat sink. In this loop, nanofluid is considered as the loop fluid and water as the external fluid in both the hot and cold heat exchangers. For this analysis, a one-dimensional homogeneous model is developed, in which the conservation equations for mass, momentum, and energy are discretized using the finite difference method. A computer code is written in MATLAB to simulate the flow in the thermosyphon loop. A comparison in terms of heat transfer is made between water and nanofluid as working fluids in the loop.
Keywords: heat exchanger, heat transfer, nanofluid, thermosyphon loop
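A hedged sketch of two building blocks that a one-dimensional homogeneous model of this kind typically rests on: the mixture rule for the nanofluid's volumetric heat capacity, and the steady-state energy balance ΔT = Q/(ṁ·cp). The property values in the usage check are generic, not the paper's:

```python
def nanofluid_rho_cp(phi, rho_bf, cp_bf, rho_p, cp_p):
    """Homogeneous-mixture rule for volumetric heat capacity:
    (rho*cp)_nf = (1 - phi)*(rho*cp)_basefluid + phi*(rho*cp)_particle."""
    return (1 - phi) * rho_bf * cp_bf + phi * rho_p * cp_p

def temperature_rise(q_watts, m_dot, cp):
    """Steady-state energy balance over a heat exchanger:
    dT = Q / (m_dot * cp)."""
    return q_watts / (m_dot * cp)
```

At zero particle loading the mixture rule collapses to the base fluid, which is a quick sanity check on any such model before the full finite-difference loop is assembled.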
Procedia PDF Downloads 476
826 Basics of Gamma Ray Burst and Its Afterglow
Authors: Swapnil Kumar Singh
Abstract:
Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission in X-ray, optical, and radio wavelengths, or "afterglow", following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived afterglow is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio). It is a slowly fading emission created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less-steep drop-off (later phases show additional behavior that we will not consider here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta.
In many ways, "reverse" shock can be a misleading term; this shock is still moving outward from the rest frame of the star at relativistic velocity, but it is ploughing backward through the ejecta in their frame and is slowing the expansion. This reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations support this broad picture of afterglows, as seen in the spectral energy distribution of the afterglow of very bright GRBs. The bluer light (optical and X-ray) appears to follow a typical synchrotron forward-shock expectation (note that the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). Further research in GRBs and particle physics is needed to unfold the mysteries of the afterglow.
Keywords: GRB, synchrotron, X-ray, isotropic energy
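A two-point estimate of the spectral index follows directly from F ∝ E^β: β = ln(F₂/F₁)/ln(E₂/E₁). A minimal sketch; the energies and fluxes in the usage check are arbitrary illustrative numbers, not measurements:

```python
import math

def spectral_index(e1, f1, e2, f2):
    """Two-point power-law index beta for F proportional to E**beta:
    beta = ln(F2/F1) / ln(E2/E1)."""
    return math.log(f2 / f1) / math.log(e2 / e1)
```

Feeding in two flux measurements that lie exactly on a F = E^(-1.2) power law recovers β = -1.2, the kind of value one fits to a synchrotron X-ray afterglow segment.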
Procedia PDF Downloads 87
825 Advanced Combinatorial Method for Solving Complex Fault Trees
Authors: José de Jesús Rivero Oliva, Jesús Salomón Llanes, Manuel Perdomo Ojeda, Antonio Torres Valle
Abstract:
Combinatorial explosion is a common problem for both predominant methods for solving fault trees: the Minimal Cut Set (MCS) approach and the Binary Decision Diagram (BDD). High memory consumption impedes the complete solution of very complex fault trees; only approximate, non-conservative solutions are possible in these cases, using truncation or other simplification techniques. This paper proposes a method (CSolv+) for solving complex fault trees without any possibility of combinatorial explosion. Each individual MCS is immediately discarded after its contribution to the basic events' importance measures and the top gate upper bound probability (TUBP) has been accounted for. An estimation of the top gate exact probability (TEP) is also provided. Therefore, running on a computer cluster, CSolv+ will guarantee the complete solution of complex fault trees. It was successfully applied to 40 fault trees from the Aralia fault tree database, performing the evaluation of the top gate probability, the 1000 most significant MCSs (SMCS), and the Fussell-Vesely, RRW, and RAW importance measures for all basic events. The highly complex fault tree nus9601 was solved with truncation probabilities from 10⁻²¹ to 10⁻²⁷ just to limit the execution time. The solution corresponding to 10⁻²⁷ evaluated 3,530,592,796 MCSs in 3 hours and 15 minutes.
Keywords: system reliability analysis, probabilistic risk assessment, fault tree analysis, basic events importance measures
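The TUBP referred to above is conventionally the min-cut upper bound, which can be accumulated one MCS at a time and therefore fits the streaming strategy of discarding each MCS once its contribution is accounted for. A minimal sketch assuming independent basic events (CSolv+ itself is not reproduced here):

```python
def mcs_probability(mcs, p):
    """Probability of one minimal cut set, assuming independent
    basic events: the product of its basic-event probabilities."""
    prob = 1.0
    for event in mcs:
        prob *= p[event]
    return prob

def top_upper_bound(mcs_list, p):
    """Min-cut upper bound on the top gate probability:
    TUBP = 1 - prod_i (1 - P(MCS_i)).
    Each MCS contributes one factor and can be discarded immediately."""
    complement = 1.0
    for mcs in mcs_list:
        complement *= 1.0 - mcs_probability(mcs, p)
    return 1.0 - complement
```

Because each cut set only multiplies one running factor, memory stays constant no matter how many MCSs are enumerated, which is the essence of avoiding combinatorial explosion in the probability evaluation.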
Procedia PDF Downloads 45
824 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network; they generate an enormous volume of real-time, high-speed data to help organizations and companies take intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake periodically or aperiodically depending on the traffic load, in order to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than required, the connection is lost, and other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability to the IoT network and hence produces poor-quality data; due to this dynamic nature, we do not know the actual reason for abnormal data. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods to perform analysis on stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how it impacts data quality.
In this research, a comprehensive review of the impact of the dynamic nature of IoT devices on data quality is carried out, and a data quality model that can deal with this challenge and produce good-quality data is presented. The model targets sensors monitoring water quality and is built using DBSCAN clustering together with weather sensors. An extensive study has been done on the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented, establishing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of the clusters' positions. Finally, a statistical analysis is performed on the clusters formed as the result of DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
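A hedged sketch of the consistency check described above: the coefficient of variation of a cluster's values, plus a simple k-sigma outlier flag standing in for DBSCAN's noise label (the paper's actual DBSCAN-based model is more involved; thresholds and data here are illustrative):

```python
import math

def coefficient_of_variation(values):
    """CoV = (population std) / mean; lower CoV means a more
    consistent cluster of sensor readings."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return math.sqrt(var) / mean

def flag_outliers(values, k=3.0):
    """Flag readings more than k standard deviations from the mean,
    a simple stand-in for DBSCAN's noise label."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [v for v in values if abs(v - mean) > k * std]
```

A perfectly stable sensor stream has CoV of zero, and a single wildly different reading in an otherwise flat stream is the canonical case the outlier dimension should catch.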
Procedia PDF Downloads 137
823 Analysis of Nonlinear Dynamic Systems Excited by Combined Colored and White Noise Excitations
Authors: Siu-Siu Guo, Qingxuan Shi
Abstract:
In this paper, single-degree-of-freedom (SDOF) systems subjected to combined white noise and colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is artificially transferred to a multi-degree-of-freedom (MDOF) system under white noise excitations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables increases to four dimensions (4-D), and the solution procedure and computer program become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. In parallel, the Monte Carlo simulation (MCS) method is performed to test the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered, and the corresponding band-limited power spectral densities (PSDs) of the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions. Moreover, the statistical parameter of mean up-crossing rate (MCR) is taken into account, which is important for reliability and failure analysis.
Keywords: filtered noise, narrow-banded noise, nonlinear dynamic, random vibration
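The reduction of colored noise to a second-order filtered white noise process can be sketched by integrating y'' + 2ζωy' + ω²y = w(t) with a semi-implicit Euler scheme. All parameter values below are illustrative, and the discrete white-noise scaling assumes the two-sided PSD convention R(τ) = 2πS₀δ(τ), under which the filter output has stationary variance πS₀/(2ζω³):

```python
import math
import random

def filtered_white_noise(omega=2.0, zeta=0.3, s0=1.0, dt=0.01,
                         n=20000, seed=0):
    """Colored noise via a second-order linear filter driven by
    Gaussian white noise:  y'' + 2*zeta*omega*y' + omega**2*y = w(t).
    A discrete white-noise step with two-sided PSD s0 has
    standard deviation sqrt(2*pi*s0/dt)."""
    rng = random.Random(seed)
    sigma_w = math.sqrt(2.0 * math.pi * s0 / dt)
    y, v = 0.0, 0.0
    out = []
    for _ in range(n):
        w = rng.gauss(0.0, sigma_w)
        a = w - 2.0 * zeta * omega * v - omega * omega * y
        v += a * dt   # semi-implicit Euler: update velocity first
        y += v * dt   # then position with the new velocity
        out.append(y)
    return out
```

The output is a zero-mean, band-limited process whose sample variance should hover near the theoretical π/(2·0.3·2³) ≈ 0.65 for the default parameters; in the paper's framing, this filter state becomes the extra state variable that turns the colored-noise SDOF problem into a white-noise MDOF one.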
Procedia PDF Downloads 224
822 Biological Significance of Long Intergenic Noncoding RNA LINC00273 in Lung Cancer Cell Metastasis
Authors: Ipsita Biswas, Arnab Sarkar, Ashikur Rahaman, Gopeswar Mukherjee, Subhrangsu Chatterjee, Shamee Bhattacharjee, Deba Prasad Mandal
Abstract:
One of the major reasons for the high mortality rate of lung cancer is the substantial delay in disease detection until late metastatic stages. It is of utmost importance to understand the detailed molecular signaling and to detect the molecular markers that can be used for the early diagnosis of cancer. Several studies have explored the emerging roles of long noncoding RNAs (lncRNAs) in various cancers, including lung cancer. The long non-coding RNA LINC00273 was recently discovered to promote cancer cell migration and invasion, and its positive correlation with the pathological stages of metastasis may prove it to be a potential target for inhibiting cancer cell metastasis. Comparing the real-time expression of LINC00273 in various human clinical cancer tissue samples with normal tissue samples revealed significantly higher expression in cancer tissues. This long intergenic noncoding RNA was found to be highly expressed in human liver tumor-initiating cells, the human gastric adenocarcinoma AGS cell line, and the human non-small cell lung cancer A549 cell line. SiRNA- and shRNA-induced knockdown of LINC00273, both in vitro and in vivo in nude mice, significantly reduced AGS and A549 cancer cell migration and invasion. LINC00273 knockdown also reduced TGF-β-induced SNAIL, SLUG, VIMENTIN, and ZEB1 expression and metastasis in A549 cells. Many reports have suggested a role for microRNAs of the miR-200 family in reversing epithelial to mesenchymal transition (EMT) by inhibiting ZEB transcription factors. In this study, hsa-miR-200a-3p was predicted via the IntaRNA-Freiburg RNA tools to be a potential target of LINC00273, with a free binding energy of −8.793 kcal/mol, and this interaction was verified by RNA pulldown, real-time PCR, and luciferase assay.
Mechanistically, LINC00273 accelerated TGF-β-induced EMT by sponging hsa-miR-200a-3p, which in turn liberated ZEB1 and promoted prometastatic functions in A549 cells in vitro, as verified by real-time PCR and western blotting. Similar expression patterns of these EMT regulatory pathway molecules, viz. LINC00273, hsa-miR-200a-3p, ZEB1, and TGF-β, were also detected in various clinical samples, such as breast cancer, oral cancer, and lung cancer tissues. Overall, this LINC00273-mediated EMT regulatory signaling can serve as a potential therapeutic target for the prevention of lung cancer metastasis.
Keywords: epithelial to mesenchymal transition, long noncoding RNA, microRNA, non-small-cell lung carcinoma
Procedia PDF Downloads 154
821 Preparation, Solid State Characterization of Etraverine Co-Crystals with Improved Solubility for the Treatment of Human Immunodeficiency Virus
Authors: B. S. Muddukrishna, Karthik Aithal, Aravind Pai
Abstract:
Introduction: Preparation of binary cocrystals of Etraverine (ETR) by using Tartaric Acid (TAR) as a conformer was the main focus of this study. Etravirine is a Class IV drug, as per the BCS classification system. Methods: Cocrystals were prepared by slow evaporation technique. A mixture of total 500mg of ETR: TAR was weighed in molar ratios of 1:1 (371.72mg of ETR and 128.27mg of TAR). Saturated solution of Etravirine was prepared in Acetone: Methanol (50:50) mixture in which tartaric acid is dissolved by sonication and then this solution was stirred using a magnetic stirrer until the solvent got evaporated. Shimadzu FTIR – 8300 system was used to acquire the FTIR spectra of the cocrystals prepared. Shimadzu thermal analyzer was used to achieve DSC measurements. X-ray diffractometer was used to obtain the X-ray powder diffraction pattern. Shake flask method was used to determine the equilibrium dynamic solubility of pure, physical mixture and cocrystals of ETR. USP buffer (pH 6.8) containing 1% of Tween 80 was used as the medium. The pure, physical mixture and the optimized cocrystal of ETR were accurately weighed sufficient to maintain the sink condition and were filled in hard gelatine capsules (size 4). Electrolab-Tablet Dissolution tester using basket apparatus at a rotational speed of 50 rpm and USP phosphate buffer (900 mL, pH = 6.8, 37 ˚C) + 1% Tween80 as a media, was used to carry out dissolution. Shimadzu LC-10 series chromatographic system was used to perform the analysis with PDA detector. An Hypersil BDS C18 (150mm ×4.6 mm ×5 µm) column was used for separation with mobile phase comprising of a mixture of ace¬tonitrile and phosphate buffer 20mM, pH 3.2 in the ratio 60:40 v/v. The flow rate was 1.0mL/min and column temperature was set to 30°C. The detection was carried out at 304 nm for ETR. Results and discussions: The cocrystals were subjected to various solid state characterization and the results confirmed the formation of cocrystals. 
The C=O stretching vibration (1741 cm⁻¹) of tartaric acid disappeared in the cocrystal, and the peak broadening of the primary amine indicates hydrogen bond formation. The difference in the melting point of the cocrystals compared to pure Etravirine (265 °C) indicates interaction between the drug and the coformer, shown by the disappearance of the first-order transformation, i.e., the melting endotherm. The difference in 2θ values between the pure drug and the cocrystals likewise indicates interaction between the drug and the coformer. Dynamic solubility and dissolution studies were conducted by the shake flask method and USP apparatus 1, respectively; a 3.6-fold increase in dynamic solubility was observed, and the in-vitro dissolution study showed a four-fold increase for the ETR:TAR (1:1) cocrystals. The ETR:TAR (1:1) cocrystals thus show improved solubility and dissolution compared to the pure drug, as clearly demonstrated by the solid-state characterization and dissolution studies.
Keywords: dynamic solubility, Etravirine, in vitro dissolution, slurry method
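The 1:1 molar charge described above can be reproduced with a short calculation. This is a minimal sketch: the molar masses used here (Etravirine ≈ 435.28 g/mol, tartaric acid ≈ 150.09 g/mol) are assumed reference values, not taken from the abstract, and the small discrepancy with the reported 371.72 mg likely comes from rounding of the molar masses.

```python
# Sketch: split a 500 mg batch into an equimolar (1:1) mixture of drug and coformer.
# Molar masses are assumed reference values, not data from the study.
MW_ETR = 435.28  # Etravirine, g/mol (assumed)
MW_TAR = 150.09  # tartaric acid, g/mol (assumed)

def one_to_one_masses(total_mg, mw_a, mw_b):
    """Masses (mg) of components A and B for an equimolar mixture of given total mass."""
    m_a = total_mg * mw_a / (mw_a + mw_b)
    return m_a, total_mg - m_a

m_etr, m_tar = one_to_one_masses(500.0, MW_ETR, MW_TAR)
print(round(m_etr, 2), round(m_tar, 2))  # close to the reported 371.72 / 128.27 mg
```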
Procedia PDF Downloads 354
820 A Method for Reconfigurable Manufacturing Systems Customization Measurement
Authors: Jesus Kombaya, Nadia Hamani, Lyes Kermad
Abstract:
Preserving a company’s place in the market under such aggressive competition is becoming a survival challenge for manufacturers. In this context, the survivors are only those who succeed in satisfying their customers’ needs as quickly as possible. The production system should be endowed with a certain level of flexibility to eliminate or reduce its rigidity, in order to facilitate the conversion and/or change of system features to produce different products. It is therefore essential to guarantee quality, speed and flexibility to survive in this competition. In the literature, this adaptability is referred to as the notion of "change". Indeed, companies are trying to establish more flexible and agile manufacturing systems through several reconfiguration actions. Reconfiguration contributes to extending the manufacturing system life cycle by modifying its physical, organizational and computer characteristics according to changing market conditions. Reconfigurability is characterized by six key elements: modularity, integrability, diagnosability, convertibility, scalability and customization. To control production systems, it is essential for manufacturers to make good use of this capability, ensuring that the system has an optimal and adapted level of reconfigurability that allows it to produce in accordance with the set requirements. This paper develops a measure of customization for reconfigurable production systems. This measure affects not only the production system but also product design and process design, and can therefore serve as a guide for the customization of manufactured products. A case study is presented to show the use of the proposed approach.
Keywords: reconfigurable manufacturing systems, customization, measure, flexibility
Procedia PDF Downloads 126
819 Recovery of Food Waste: Production of Dog Food
Authors: K. Nazan Turhan, Tuğçe Ersan
Abstract:
The population of the world is approximately 8 billion, and it is increasing uncontrollably and irrepressibly, leading to an increase in consumption. This situation causes crucial problems, and food waste is one of them. The Food and Agriculture Organization of the United Nations (FAO) defines food waste as the discarding or alternative utilization of food that is safe and nutritious for human consumption, along the entire food supply chain from primary production down to the end household consumer level. In addition, according to FAO estimates, one-third of all food produced for human consumption is lost or wasted worldwide every year. Wasting food endangers natural resources and causes hunger. For instance, excessive amounts of food waste cause greenhouse gas emissions, contributing to global warming. Therefore, waste management has been gaining significance in the last few decades at both local and global levels due to the expected scarcity of resources for the increasing population of the world. There are several ways to recover food waste. According to the United States Environmental Protection Agency’s Food Recovery Hierarchy, the food waste recovery options are source reduction, feeding hungry people, feeding animals, industrial uses, composting, and landfill/incineration, from the most preferred to the least preferred, respectively. Bioethanol, biodiesel, biogas, agricultural fertilizer and animal feed can be obtained from the food waste generated by different food industries. In this project, feeding animals was selected as the food waste recovery method, and the food waste of a single plant was used to ensure ingredient uniformity. Grasshoppers were used as a protein source. In other words, the project was performed to develop a dog food product by recovering the plant’s food waste. The collected food waste and purchased grasshoppers were sterilized, dried and pulverized. Then, they were all mixed with 60 g of agar-agar solution (4% w/v).
Three different aromas were added separately to the samples to enhance flavour quality. Since required intakes differ across dogs, fulfilling all nutritional needs is one of the challenges. In other words, there is a wide range of nutritional needs in terms of carbohydrates, protein, fat, sodium, calcium, and so on. Furthermore, the requirements differ depending on age, gender, weight, height, and breed. Therefore, the developed product contains average amounts of each substance so as not to cause any deficiency or surplus. On the other hand, it contains more protein than similar products on the market. The product was evaluated in terms of contamination and nutritional content. For contamination risk, E. coli and Salmonella detection experiments were performed, and the results were negative. For the nutritional value test, protein content analysis was done. The protein contents of different samples varied between 26.07% and 33.68%. In addition, water activity analysis was performed, and the water activity (aw) values of different samples ranged between 0.2456 and 0.4145.
Keywords: food waste, dog food, animal nutrition, food waste recovery
Procedia PDF Downloads 62
818 Opacity Synthesis with Orwellian Observers
Authors: Moez Yeddes
Abstract:
The property of opacity is widely used in the formal verification of security in computer systems and protocols. Opacity is a general language-theoretic scheme for many security properties of systems: it is a parametrized framework in which several security properties of a system can be expressed. A secret behaviour of a system is opaque if a passive attacker can never deduce its occurrence from the observation of the system. Instead of considering static observability, where the set of observable events is fixed off-line, or dynamic observability, where the set of observable events changes over time depending on the history of the trace, we introduce Orwellian partial observability, where unobservable events are not revealed provided that no downgrading event ever occurs in the future of the trace. Orwellian partial observability is needed to model intransitive information flow; this Orwellian observation function is known as the ipurge function. In previous work, we showed that verifying whether a regular secret is opaque for a regular language L w.r.t. an Orwellian projection is PSPACE-complete, while the problem has been proved undecidable even for a regular language L w.r.t. a general Orwellian observation function. In this paper, we address two problems of opacification of a regular secret ϕ for a regular language L w.r.t. an Orwellian projection: given L and a secret ϕ ∈ L, the first problem consists in computing some minimal regular super-language M of L, if it exists, such that ϕ is opaque for M, and the second consists in computing the supremal sub-language M′ of L such that ϕ is opaque for M′. We derive both language-theoretic characterizations and algorithms to solve these two dual problems.
Keywords: security policies, opacity, formal verification, Orwellian observation
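The Orwellian observation rule described above (an unobservable event stays hidden unless a downgrading event occurs later in the trace) can be sketched on finite traces. This is an illustrative sketch only: the event names and the exact reveal rule are assumptions for demonstration, not the paper's formal ipurge definition.

```python
def orwellian_projection(trace, observable, downgrades):
    """Project a trace under an Orwellian observer: an unobservable event
    is revealed only if some downgrading event occurs later in the trace."""
    out = []
    for i, e in enumerate(trace):
        if e in observable:
            out.append(e)
        elif any(d in downgrades for d in trace[i + 1:]):
            out.append(e)  # retroactively revealed by a future downgrade
    return out

# 'h' is a secret action, 'd' a downgrading action, 'a'/'b' are public.
print(orwellian_projection(['h', 'a', 'b'], {'a', 'b', 'd'}, {'d'}))       # 'h' stays hidden
print(orwellian_projection(['h', 'a', 'd', 'b'], {'a', 'b', 'd'}, {'d'}))  # 'h' is revealed
```

A passive attacker who only sees such projections cannot distinguish a secret run from a public one until a downgrade occurs, which is exactly the intransitive-flow behaviour the abstract describes.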
Procedia PDF Downloads 224
817 Prime Graphs of Polynomials and Power Series Over Non-Commutative Rings
Authors: Walaa Obaidallah Alqarafi, Wafaa Mohammed Fakieh, Alaa Abdallah Altassan
Abstract:
Algebraic graph theory serves as a bridge between algebraic structures and graphs. It has several uses in many fields, including chemistry, physics, and computer science. The prime graph is a type of graph associated with a ring R, where the vertex set is the whole ring R, and two vertices x and y are adjacent if either xRy=0 or yRx=0. However, the investigation of the prime graph over rings remains relatively limited. The behavior of this graph in extended rings, like R[x] and R[[x]], where R is a non-commutative ring, deserves more attention because of its wider applicability in algebra and other mathematical fields. To study the prime graphs of polynomial and power series rings, we used a combination of ring-theoretic and graph-theoretic techniques. This paper focuses on two invariants: the diameter and the girth of these graphs. Furthermore, the work discusses how the graph structures change when passing from R to R[x] and R[[x]]. In our study, we found that the set of strong zero-divisors of the ring R represents the set of vertices in prime graphs. Based on this observation, we redefined the vertices of prime graphs using the definition of strong zero-divisors. Additionally, our results show that although the prime graphs of R[x] and R[[x]] are comparable to the graph of R, they have different combinatorial characteristics, since these extensions contain new strong zero-divisors. In particular, we find conditions under which the diameter and girth of the graphs do or do not change as they expand from R to R[x] and R[[x]]. In conclusion, this study shows how extending a non-commutative ring R to R[x] and R[[x]] affects the structure of their prime graphs, particularly in terms of diameter and girth. These findings enhance the understanding of the relationship between ring extensions and graph properties.
Keywords: prime graph, diameter, girth, polynomial ring, power series ring
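The adjacency condition xRy = 0 or yRx = 0 can be checked by brute force on a small non-commutative ring. The choice of upper-triangular 2×2 matrices over Z₂ below is an illustrative assumption, not an example from the paper; it simply shows how the prime-graph edge set is built from the definition.

```python
from itertools import product

MOD = 2  # work over Z_2

def mat_mul(x, y):
    """2x2 matrix product over Z_MOD."""
    return tuple(
        tuple(sum(x[i][k] * y[k][j] for k in range(2)) % MOD for j in range(2))
        for i in range(2)
    )

# The ring R of upper-triangular 2x2 matrices over Z_2 (8 elements).
R = [((a, b), (0, c)) for a, b, c in product(range(MOD), repeat=3)]
ZERO = ((0, 0), (0, 0))

def xry_zero(x, y):
    """True iff x*r*y = 0 for every r in R (i.e. xRy = 0)."""
    return all(mat_mul(mat_mul(x, r), y) == ZERO for r in R)

# Prime-graph adjacency: distinct x, y with xRy = 0 or yRx = 0.
edges = {(x, y) for x in R for y in R
         if x != y and (xry_zero(x, y) or xry_zero(y, x))}
print(len(R), len(edges) // 2)  # vertex count and undirected edge count
```

The same enumeration extends (in principle) to truncated polynomials over R, which is how one can experiment with the new strong zero-divisors appearing in R[x].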
Procedia PDF Downloads 17
816 GIS Database Creation for Impacts of Domestic Wastewater Disposal on Bida Town, Niger State, Nigeria
Authors: Ejiobih Hyginus Chidozie
Abstract:
A Geographic Information System (GIS) is a configuration of computer hardware and software specifically designed to effectively capture, store, update, manipulate, analyse and display all forms of spatially referenced information. The GIS database is referred to as the heart of a GIS. It holds location data, attribute data and the spatial relationships between objects and their attributes. Sewage and wastewater management have assumed increased importance lately as a result of general concern expressed worldwide about the problems of pollution of the environment: contamination of the atmosphere, rivers, lakes, oceans and groundwater. In this research, a GIS database was created to study the impacts of domestic wastewater disposal methods on Bida town, Niger State, as a model for investigating similar impacts on other cities in Nigeria. Results from a GIS database are very useful to decision makers and researchers. Bida town was subdivided into four regions, eight zones, and 24 sectors based on the prevailing natural morphology of the town. A GPS receiver and a structured questionnaire were used to collect information and attribute data from 240 households of the study area. Domestic wastewater samples were collected from the 24 sectors of the study area for laboratory analysis. ArcView 3.2a GIS software was used to create the GIS databases for the ecological, health and socioeconomic impacts of domestic wastewater disposal methods in Bida town.
Keywords: environment, GIS, pollution, software, wastewater
Procedia PDF Downloads 420
815 Can 3D Virtual Prototyping Conquer the Apparel Industry?
Authors: Evridiki Papachristou, Nikolaos Bilalis
Abstract:
Imagine an apparel industry where fashion design does not begin with a paper-and-pen drawing that is then translated into a pattern and later into a 3D model in which the designer tries out different fabrics, colours and contrasts. Instead, imagine a fashion designer of the future who produces that initial fashion drawing in a three-dimensional space and does not leave that environment until the product is done, communicating his or her ideas with the entire development team in true-to-life 3D. Three-dimensional (3D) technology, while well established in many other industrial sectors like automotive, aerospace, architecture and industrial design, has only just started to open up a whole range of new opportunities for apparel designers. The paper will discuss the process of 3D simulation technology enhanced by high-quality visualization of data and its capability to ensure strong competitiveness in the market. Secondly, it will underline the most frequent problems and challenges that occur in the process chain when various partners in the production of textiles and apparel work together. Finally, it will offer a perspective on how virtual prototyping technology will change the global textile and apparel industry to a level where designs are visualized on a computer and various scenarios modeled without even having to produce a physical prototype. This state-of-the-art 3D technology has been described as transformative and "disruptive" compared to the way apparel companies develop their fashion products today. It provides the benefit of virtual sampling, not only for quick testing of design ideas but also for reducing process steps and gaining more visibility: a so-called "digital asset" that can be used for other purposes such as merchandising or marketing.
Keywords: 3D visualization, apparel, virtual prototyping, prototyping technology
Procedia PDF Downloads 588
814 Pedagogical Practices of a Teacher in Students' Experience Tellings: A Conversation Analytic Study
Authors: Derya Duran, Christine Jacknick
Abstract:
This study explores post-task reflections in an English as a Medium of Instruction (EMI) setting, focusing specifically on how a teacher performs pedagogical practices such as reformulating, extending and evaluating following students’ spontaneous experience tellings in EMI classrooms. The data consist of 30 hours of video recordings from two EMI content classes, recorded over an academic term at a university in Turkey. The course, Guidance, is offered to fourth-year undergraduate students as a compulsory course in the Department of Educational Sciences. The participants (n=78) study at the Faculty of Education, majoring in different educational departments (i.e., Computer Education and Instructional Technology, Elementary Education, Foreign Language Education). Using conversation analysis, we demonstrate that the teacher employs a variety of interactional resources to elicit (i.e., asking specific questions) and also to provide (i.e., giving scientific information) as much content as possible, which also sheds light on the institutional fingerprints of the current research context. The study contributes to the existing research by unpacking the articulation of personal experiences and the cultivation of collaborativeness in classroom interaction. Moreover, in describing the dialogic nature of these specific occasions, the study demonstrates how teacher and students address learning tasks together (collectivity), how they orient to each other's turns interactionally (reciprocity), and how they keep the pedagogical focus in mind (purposefulness).
Keywords: conversation analysis, English as a medium of instruction, higher education, post-task reflections
Procedia PDF Downloads 150
813 Investigating Online Literacy among Undergraduates in Malaysia
Authors: Vivien Chee Pei Wei
Abstract:
Today we live in a scenario in which letters share screen space with images on devices that vary in size, shape, and style. The popularization of television, then the computer, and now e-readers, tablets, and smartphones has made electronic media assume the role that was previously restricted to printed materials. Since the extensive use of new technologies to produce, disseminate, collect and access electronic publications began, the changes to reading have intensified. Reading online involves more than just utilizing specific skills, strategies, and practices; it also involves negotiating multiple information sources. In this study, different perspectives on digital reading are explored in order to define the key aspects of the term. The focus is on how new technologies affect undergraduates’ reading behavior, which in turn gives readers different reading levels and engagement with the text and other supporting materials in the same medium. Also important is the relationship between reading platforms, reading levels and the formats of electronic publications. The study looks at the online reading practices of about 100 undergraduates from a local university. The data, collected through a survey and interviews with the respondents, are analyzed thematically. The findings show that digital and traditional reading are interrelated and should not be viewed as separate, but as complementary to each other. However, reading online complicates some of the skills required by traditional reading. Consequently, in order to successfully read and comprehend multiple sources of information online, undergraduates need regular opportunities to practice and develop these skills as part of their natural reading practices.
Keywords: concepts, digital reading, literacy, traditional reading
Procedia PDF Downloads 310
812 Magneto-Thermo-Mechanical Analysis of Electromagnetic Devices Using the Finite Element Method
Authors: Michael G. Pantelyat
Abstract:
Fundamentals of pure and applied research in the area of magneto-thermo-mechanical numerical analysis and design of innovative electromagnetic devices (modern induction heaters, novel thermoelastic actuators, rotating electrical machines, induction cookers, electrophysical devices) are elaborated. Mathematical models of magneto-thermo-mechanical processes in electromagnetic devices are developed, taking into account the main interactions of the interrelated phenomena. In addition, a graphical representation of the coupled (multiphysics) phenomena under consideration is proposed. Numerical techniques for the solution of nonlinear problems are also developed. On this basis, effective numerical algorithms for the solution of actual problems of practical interest are proposed, validated and implemented in the applied 2D and 3D computer codes developed. Many applied problems of practical interest regarding modern electrical engineering devices are numerically solved. The influences of various interrelated physical phenomena (temperature dependences of material properties, thermal radiation, conditions of convective heat transfer, contact phenomena, etc.) on the accuracy of the electromagnetic, thermal and structural analyses are investigated. Important practical recommendations on the choice of rational structures, materials and operation modes of the electromagnetic devices under consideration are proposed and implemented in industry.
Keywords: electromagnetic devices, multiphysics, numerical analysis, simulation and design
Procedia PDF Downloads 385
811 An Integrated Architecture of E-Learning System to Digitize the Learning Method
Authors: M. Touhidul Islam Sarker, Mohammod Abul Kashem
Abstract:
The purpose of this paper is to improve the e-learning system and digitize the learning method in the educational sector. The learner logs into the e-learning platform and easily accesses the digital content; the content can be downloaded, and an assessment can be taken for evaluation. Learners can also access these digital resources using a tablet, computer, or smartphone. An e-learning system can be defined as teaching and learning with the help of multimedia technologies and the internet through access to digital content. E-learning is replacing the traditional education system with information and communication technology-based learning. This paper has designed and implemented an integrated e-learning system architecture with a University Management System. Moodle (Modular Object-Oriented Dynamic Learning Environment) is the best e-learning system, but it has no school or university management system. In this research, we did not consider school students because they often lack internet facilities; instead, we considered university students, who have internet access and use these technologies. The University Management System covers different types of activities such as student registration, account management, teacher information, semester registration, staff information, etc. If we integrate these modules with Moodle, then we can overcome this limitation of Moodle, and it will enhance the e-learning system architecture, making effective use of technology. This architecture allows the learner to easily access the resources of the e-learning platform anytime and anywhere, which digitizes the learning method.
Keywords: database, e-learning, LMS, Moodle
Procedia PDF Downloads 185
810 Foreseen the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics and intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies to autonomously sort, handle, and package soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future work conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by resorting to interviews, questionnaires, literature review and case studies. Findings and results will be presented in the "Strategies for Workers' Skills Development, Health and Safety, Communication and Engagement" handbook.
The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance on complying with European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled "Human Safety and Privacy Measures". These projects received funding from the European Union's Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).
Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 62
809 Artificial Intelligence for Generative Modelling
Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta
Abstract:
As technology advances toward high computational resources, there is a paradigm shift in the usage of these resources to optimize the design process. This paper discusses the use of generative design with artificial intelligence to build better models that adapt operations like selection, mutation, and crossover to generate results. The human mind thinks of the simplest approach while designing an object, but the intelligence learns from the past and designs complex, optimized CAD models. Generative design takes the boundary conditions and comes up with multiple solutions, iterating toward a sturdy design with the most optimal parameters for the given constraints, saving huge amounts of time and resources. The new production techniques at our disposal allow us to use additive manufacturing, 3D printing, and other innovative manufacturing techniques to save resources and design artistically engineered CAD models. This paper also discusses the genetic algorithm and the non-domination technique for choosing the best results, drawing on biomimicry, whose designs have evolved in nature over millions of years. The computer uses parametric models to generate newer models through an iterative approach and uses cloud computing to store these iterative designs. The later part of the paper compares topology optimization, which has previously been used to generate CAD models, with generative design. Finally, the paper shows the performance of the algorithms and how they help in designing resource-efficient models.
Keywords: genetic algorithm, bio mimicry, generative modeling, non-dominant techniques
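The selection, mutation, and crossover operations mentioned above can be sketched with a minimal genetic algorithm. This is a toy illustration, not the paper's implementation: the one-max objective (count of 1-bits) stands in for a real design fitness function, and all parameters (population size, mutation rate, tournament size) are illustrative assumptions.

```python
import random

random.seed(42)

def fitness(bits):
    """Toy objective (one-max); a real generative-design run would score a CAD model."""
    return sum(bits)

def tournament(pop, k=3):
    """Selection: best of k randomly chosen individuals."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """One-point crossover of two parents."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    """Independent bit-flip mutation."""
    return [1 - b if random.random() < rate else b for b in bits]

N, GENS, L = 30, 40, 20
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(N)]

best = max(pop, key=fitness)
print(fitness(best))  # typically close to the maximum of 20
```

In a generative-design setting, the bitstring would encode design parameters and `fitness` would invoke a solver (stress, weight, cost); the loop itself stays the same.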
Procedia PDF Downloads 147
808 Cross Section Measurement for Formation of Metastable State of ¹¹¹ᵐCd through ¹¹¹Cd (γ, γ`) ¹¹¹ᵐCd Reaction Induced by Bremsstrahlung Generated through 6 MeV Electrons
Authors: Vishal D. Bharud, B. J. Patil, S. S. Dahiwale, V. N. Bhoraskar, S. D. Dhole
Abstract:
The photon-induced average reaction cross section of the ¹¹¹Cd (γ, γ`) ¹¹¹ᵐCd reaction was experimentally determined for a 6 MeV bremsstrahlung energy spectrum by utilizing activation and offline γ-ray spectrometric techniques. The 6 MeV electron accelerator (Racetrack Microtron) of Savitribai Phule Pune University, Pune, was used for the experimental work. The bremsstrahlung spectrum generated by bombarding 6 MeV electrons on a lead target was theoretically estimated with the FLUKA code. Bremsstrahlung radiation can have energies exceeding the particle emission threshold, which is normally above 6 MeV. Photons of energies below the particle emission threshold undergo absorption into discrete energy levels, with the possibility of exciting nuclei to excited states, including metastable states. The ¹¹¹Cd (γ, γ`) ¹¹¹ᵐCd reaction cross sections were calculated at different bombarding photon energies using the TALYS 1.8 computer code with default parameters. The focus of the present work was to study the (γ, γ') reaction for exciting ¹¹¹Cd nuclei to metastable states with threshold energies below 3 MeV. The flux-weighted average cross section was obtained from the theoretical values of TALYS 1.8 and TENDL 2017 and is found to be in good agreement with the present experimental cross section.
Keywords: bremsstrahlung, cross section, FLUKA, TALYS-1.8
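The flux-weighted average cross section compared above is conventionally σ̄ = ∫σ(E)φ(E)dE / ∫φ(E)dE over the bremsstrahlung spectrum. The sketch below evaluates this on a discrete energy grid with the trapezoidal rule; the σ(E) and φ(E) values are illustrative placeholders, not TALYS or FLUKA output.

```python
# Sketch: flux-weighted average cross section on a discrete energy grid.
# All numerical values are illustrative toys, not data from the study.
E     = [3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0]   # photon energy, MeV
sigma = [0.1, 0.4, 0.9, 1.5, 1.8, 1.6, 1.2]   # sigma(E), mb (toy values)
phi   = [9.0, 6.5, 4.8, 3.3, 2.1, 1.0, 0.3]   # bremsstrahlung flux (toy values)

def trapz(y, x):
    """Trapezoidal-rule integral of y over grid x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2 for i in range(len(x) - 1))

sigma_avg = trapz([s * f for s, f in zip(sigma, phi)], E) / trapz(phi, E)
print(round(sigma_avg, 3))  # a flux-weighted mean, between min and max of sigma
```

Because φ(E) falls steeply toward the 6 MeV endpoint, the weighted average is pulled toward the low-energy part of σ(E), which is why the weighting matters for comparison with activation measurements.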
Procedia PDF Downloads 171
807 Applying Artificial Neural Networks to Predict Speed Skater Impact Concussion Risk
Authors: Yilin Liao, Hewen Li, Paula McConvey
Abstract:
Speed skaters often face a risk of concussion when they fall on the ice and impact crash mats during practices and competitive races. Several variables, including those related to the skater, the crash mat, and the impact position (body side/head/feet impact), are believed to influence the severity of the skater's concussion. While computer simulation modeling can be employed to analyze these accidents, the simulation process is time-consuming and does not provide rapid information for coaches and teams to assess the skater's injury risk in competitive events. This research paper explores the feasibility of using AI techniques to evaluate a skater's potential concussion severity, and develops a fast concussion prediction tool using artificial neural networks to reduce the risk of treatment delays for injured skaters. The primary data are collected through virtual tests and physical experiments designed to simulate skater-mat impact. They are then analyzed to identify patterns and correlations, and finally used to train and fine-tune the artificial neural networks for accurate prediction. The development of the prediction tool by employing machine learning strategies contributes to the application of AI methods in sports science and has theoretical implications for using AI techniques in predicting and preventing sports-related injuries.
Keywords: artificial neural networks, concussion, machine learning, impact, speed skater
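The train-and-predict workflow described above can be sketched with a tiny feed-forward network in pure Python. Everything here is an illustrative assumption: the two input features (normalized impact speed, mat stiffness), the binary risk labels, and the network size are synthetic stand-ins for the paper's real simulation and experimental data.

```python
import math
import random

random.seed(0)

# Synthetic impact records: (normalized impact speed, mat stiffness) -> 1 if
# high concussion risk, 0 otherwise. For illustration only.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.2, 0.1), 0),
        ((0.1, 0.3), 0), ((0.7, 0.7), 1), ((0.3, 0.2), 0)]

H = 3  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]  # input->hidden
w2 = [random.uniform(-1, 1) for _ in range(H)]                      # hidden->output

def forward(x):
    """One hidden tanh layer, sigmoid output."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    o = 1 / (1 + math.exp(-sum(w * hi for w, hi in zip(w2, h))))
    return h, o

def loss():
    """Mean squared error over the toy dataset."""
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = loss()
lr = 0.1
for _ in range(500):           # plain stochastic gradient descent
    for x, y in data:
        h, o = forward(x)
        d_o = 2 * (o - y) * o * (1 - o)            # dL/d(pre-sigmoid)
        for j in range(H):
            grad_w2j = d_o * h[j]
            d_h = d_o * w2[j] * (1 - h[j] ** 2)    # backprop through tanh
            w2[j] -= lr * grad_w2j
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]

loss_after = loss()
print(round(loss_before, 3), round(loss_after, 3))  # training should reduce the error
```

A deployed tool would replace this toy with a properly validated model trained on the skater-mat simulation data, but the forward/backward structure is the same.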
Procedia PDF Downloads 108
806 Optical and Structural Characterization of Rare Earth Doped Phosphate Glasses
Authors: Zélia Maria Da Costa Ludwig, Maria José Valenzuela Bell, Geraldo Henriques Da Silva, Thales Alves Faraco, Victor Rocha Da Silva, Daniel Rotmeister Teixeira, Vírgilio De Carvalho Dos Anjos, Valdemir Ludwig
Abstract:
Advances in telecommunications grow with the development of optical amplifiers based on rare earth ions. The focus has been concentrated on silicate glasses, although their amplified spontaneous emission is limited to a few tens of nanometers (~40 nm). Recently, phosphate glasses have received great attention due to their potential application in optical data transmission, detection, sensors, laser detectors, waveguides and optical fibers, besides their excellent physical properties such as high thermal expansion coefficients and low melting temperatures. Compared with silica glasses, phosphate glasses provide different optical properties, such as a large infrared transmission window, and good density. Research on improving the physical and chemical durability of phosphate glass by the addition of heavy metal oxides to P2O5 has been performed. The addition of Na2O further improves the solubility of rare earths, while increasing the Al2O3 links in the P2O5 tetrahedra results in increased durability and glass transition temperature and a decrease in the coefficient of thermal expansion. This work describes the structural and spectroscopic characterization of a phosphate glass matrix doped with different Er (erbium) concentrations. The phosphate glasses containing Er3+ ions were prepared by the melt technique. A study of optical absorption, luminescence and lifetime was conducted in order to characterize the infrared emission of Er3+ ions at 1540 nm, due to the radiative transition 4I13/2 → 4I15/2. Our results indicate that the present glass is quite a good matrix for Er3+ ions, and the quantum efficiency of the 1540 nm emission was high. A quenching mechanism for the mentioned luminescence was not observed up to 2.0 mol% Er concentration. The Judd-Ofelt parameters, radiative lifetime and quantum efficiency have been determined in order to evaluate the potential of Er3+ ions in the new phosphate glass. The parameters follow the trend Ω2 > Ω4 > Ω6.
It is well known that the parameter Ω2 is an indication of the dominant covalent nature and/or structural changes in the vicinity of the ion (short-range effects), while the Ω4 and Ω6 intensity parameters are long-range parameters that can be related to bulk properties such as the viscosity and rigidity of the glass. From the PL measurements, no red or green upconversion was observed when pumping the samples with laser excitation at 980 nm. As a future prospect, this glass system will be synthesized with silver in order to determine the influence of silver nanoparticles on the Er3+ ions.
Keywords: phosphate glass, erbium, luminescence, glass system
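The quantum efficiency referenced in the abstract is conventionally η = τ_meas/τ_rad, the ratio of the measured 4I13/2 lifetime to the radiative lifetime obtained from the Judd-Ofelt analysis. The lifetimes in the sketch below are illustrative assumed numbers, not the values measured in the study.

```python
# Sketch: quantum efficiency of the 1540 nm Er3+ emission from measured and
# radiative (Judd-Ofelt) lifetimes. Lifetimes here are assumed, for illustration.
def quantum_efficiency(tau_measured_ms, tau_radiative_ms):
    """eta = tau_meas / tau_rad, expressed as a percentage."""
    return 100.0 * tau_measured_ms / tau_radiative_ms

tau_meas = 7.2   # ms, assumed measured 4I13/2 lifetime
tau_rad  = 8.0   # ms, assumed Judd-Ofelt radiative lifetime
print(quantum_efficiency(tau_meas, tau_rad))  # ~90 %
```

A high ratio like this (measured lifetime close to the radiative one) is what the abstract means by a high quantum efficiency: non-radiative losses and quenching channels are weak.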
Procedia PDF Downloads 509