Search results for: spectral domain optical coherence tomography
2794 Using Artificial Neural Networks for Optical Imaging of Fluorescent Biomarkers
Authors: K. A. Laptinskiy, S. A. Burikov, A. M. Vervald, S. A. Dolenko, T. A. Dolenko
Abstract:
The article presents the results of applying artificial neural networks to separate the fluorescent contribution of nanodiamonds, used in biomedicine as biomarkers, adsorbents, and drug carriers, from the fluorescent background of intrinsic biological fluorophores. The principal possibility of solving this problem is shown. The neural network architecture made it possible to detect the fluorescence of nanodiamonds against the background autofluorescence of egg white with high accuracy, better than 3 µg/ml.
Keywords: artificial neural networks, fluorescence, data aggregation, biomarkers
Procedia PDF Downloads 712
2793 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
There is dispersed energy in radio frequency (RF) signals that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wire connections or a battery supply. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by a combination of one or more Schottky diodes connected in series or shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies. Therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations are used, such as voltage doublers or modified bridge converters. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in a SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and analysis of quasi-static electromagnetic field interactions, i.e., at low frequency, these simulators are limited: they cannot properly model microwave hybrid circuits that contain both lumped and distributed elements. This work proposes, therefore, the electromagnetic modelling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-high frequencies, with application to rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas.
For this purpose, the numerical FDTD (Finite-Difference Time-Domain) method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampère-Maxwell equation is first applied to the equations of current density and electric field within the FDTD method, together with its circuital relation to the voltage drop across the modeled component, for the lumped-parameter case, using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulations previously proposed for the passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems
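The leapfrog field update at the heart of FDTD can be illustrated with a minimal one-dimensional sketch. This is illustrative only: the LE-FDTD formulation described above additionally injects a lumped-element current term into the Ampère-Maxwell update, and the grid size, step count, and source shape below are arbitrary choices, not the paper's setup.

```python
import numpy as np

# Minimal 1D free-space FDTD: leapfrog update of Ez and Hy on a staggered
# grid (normalized units). PEC walls at both ends; soft Gaussian source at
# the center. All parameters are illustrative.
nz, nt = 200, 250
ez = np.zeros(nz)        # electric field samples
hy = np.zeros(nz - 1)    # magnetic field samples (staggered half-cell)
courant = 0.5            # normalized c*dt/dz; must be <= 1 for stability

for n in range(nt):
    hy += courant * np.diff(ez)        # Faraday's law update
    ez[1:-1] += courant * np.diff(hy)  # Ampere-Maxwell update
    ez[nz // 2] += np.exp(-((n - 30) / 8.0) ** 2)  # soft source

# A stable run keeps the fields bounded while the injected pulse propagates
print(np.isfinite(ez).all(), np.abs(ez).max() > 0)
```

Because the Courant number is kept at or below 1, the scheme stays stable; a lumped element would enter through an extra current term in the `ez` update line.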
Procedia PDF Downloads 136
2792 Tool for Maxillary Sinus Quantification in Computed Tomography Exams
Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina
Abstract:
The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, aid thermoregulation, impart resonance to the voice, among other roles. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables quantitative analysis. However, this is not always possible in the clinical routine, and when possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Currently, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, and shape, among others.
The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. In the statistical analyses comparing both methods, linear regression showed a strong association and low dispersion between variables. The Bland-Altman analyses showed no significant differences between the methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice, where it may be useful in the diagnosis and treatment of MS diseases.
Keywords: maxillary sinus, support vector machine, region growing, volume quantification
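The seeded region-growing step and the Jaccard comparison can be sketched in a few lines. This is a simplified intensity-tolerance criterion on a toy 2D array; the actual tool operates on CT slices with SVM-detected seeds and adds morphological cleanup, and all names and values here are illustrative.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed`, adding 4-neighbors whose intensity lies
    within `tol` of the seed value (simplified homogeneity criterion)."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask

# Toy "CT slice": a bright 10x10 square (sinus-cavity analog) on background
img = np.zeros((20, 20))
img[5:15, 5:15] = 100
mask = region_grow(img, (10, 10), tol=10)

# Jaccard similarity against the known ground truth (perfect here)
truth = img > 50
jaccard = (mask & truth).sum() / (mask | truth).sum()
print(int(mask.sum()), round(jaccard, 2))  # → 100 1.0
```

On real data, the Jaccard coefficient compares the automatic mask against the radiologist's manual segmentation, as in the > 0.90 values reported above.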
Procedia PDF Downloads 504
2791 Efficiency Improvement of REV-Method for Calibration of Phased Array Antennas
Authors: Daniel Hristov
Abstract:
The paper describes the principle of operation, simulation, and physical validation of a method for simultaneous acquisition of the gain and phase states of multiple antenna elements and the corresponding feed lines across a Phased Array Antenna (PAA). The derived gain and phase values are used for PAA calibration. The method utilizes the Rotating-Element Electric-Field Vector (REV) principle, currently used for gain and phase state estimation of a single antenna element across an active antenna aperture. A significant reduction of the procedure execution time is achieved by simultaneously setting different phase delays on multiple phase shifters, followed by a single power measurement. The initial gain and phase states are calculated using spectral and correlation analysis of the measured power series.
Keywords: antenna, antenna arrays, calibration, phase measurement, power measurement
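The single-element REV principle underlying the method can be sketched as follows: rotating one element's phase through 2π makes the measured array power trace a cosine whose first Fourier harmonic encodes the element's amplitude and phase. The multi-element extension described above assigns distinct rotation rates and separates the elements spectrally; all numeric values below are illustrative.

```python
import numpy as np

# Reference field of all other elements, and the element under test
E0 = 4.0 + 0.0j                  # combined field of the remaining elements
a, psi = 0.7, np.radians(40.0)   # unknown element amplitude and phase

# Rotate the element's phase shifter through one full turn and record power:
# P(phi) = |E0|^2 + a^2 + 2*|E0|*a*cos(phi + psi)
phi = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
power = np.abs(E0 + a * np.exp(1j * (phi + psi))) ** 2

# The first Fourier harmonic of the power series equals |E0|*a*exp(1j*psi)
h1 = np.fft.fft(power)[1] / phi.size
a_est = np.abs(h1) / np.abs(E0)
psi_est = np.degrees(np.angle(h1))
print(round(a_est, 3), round(psi_est, 1))  # → 0.7 40.0
```

In the simultaneous variant, each phase shifter rotates at a different rate, so each element appears at a distinct harmonic of the same power series.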
Procedia PDF Downloads 139
2790 Embodying the Ecological Validity in Creating the Sustainable Public Policy: A Study in Strengthening the Green Economy in Indonesia
Authors: Gatot Dwi Hendro, Hayyan ul Haq
Abstract:
This work aims to explore the strategy for embodying ecological validity in creating sustainable public policy, particularly in strengthening the green economy in Indonesia. This green economy plays an important role in supporting national development in Indonesia, as it is part of the national policy that holds primary priority in Indonesian governance. The green economy refers to national development covering strategic natural resources, such as mining, gold, oil, coal, forest, water, and marine resources, and the supporting infrastructure for production and distribution, such as factories, roads, and bridges. Thus, all activities in this national development should consider sustainability. This sustainability requires the strong commitment of the national and regional governments, as well as the local governments, to make ecology the main requirement for issuing any policy, such as licences for mining production, and for developing and building new production and supporting infrastructure to optimise national resources. For that reason, this work focuses on the strategy for embodying ecological values and norms in public policy. In detail, this work offers a method, i.e., legal techniques, for visualising and embodying norms and public policy that are ecologically valid. This ecological validity is required in order to maintain and sustain our collective life.
Keywords: ecological validity, sustainable development, coherence, Indonesian Pancasila values, environment, marine
Procedia PDF Downloads 487
2789 Stability of Essential Oils in Pang-Rum by Gas Chromatography-Mass Spectrometry
Authors: K. Jarmkom, P. Eakwaropas, W. Khobjai, S. Techaeoi
Abstract:
Ancient Thai perfumed powder was used as a fragrance for clothing, food, and the body. Plant-based natural Thai perfume products are known as Pang-Rum. The objective of this study was to evaluate the stability of the essential oils after six months of incubation. The chemical compositions were determined by gas chromatography-mass spectrometry (GC-MS) in terms of the qualitative composition of the isolated essential oil. The essential oil of the natural products was isolated by incubating the sample for 5 min at 40 ºC. The volatile components were identified from the percentages of total peak areas, comparing their GC retention times against the NIST mass spectral library. The results show no significant difference among the seven chromatograms of perfumed powder (Pang-Rum), both with and without binder. Further identification was done by GC-MS. Some components of Pang-Rum with/without binder were changed by temperature and time.
Keywords: GC-MS analysis, essential oils, stability, Pang-Rum
Procedia PDF Downloads 273
2788 Effect of Electropolymerization Method in the Charge Transfer Properties and Photoactivity of Polyaniline Photoelectrodes
Authors: Alberto Enrique Molina Lozano, María Teresa Cortés Montañez
Abstract:
Polyaniline (PANI) photoelectrodes were electrochemically synthesized through electrodeposition employing three techniques: chronoamperometry (CA), cyclic voltammetry (CV), and potential pulse (PP) methods. The substrate used for electrodeposition was a fluorine-doped tin oxide (FTO) glass with dimensions of 2.5 cm x 1.3 cm. Subsequently, structural and optical characterization was conducted utilizing Fourier-transform infrared (FTIR) spectroscopy and UV-visible (UV-vis) spectroscopy, respectively. The FTIR analysis revealed variations in the molar ratio of benzenoid to quinonoid rings within the PANI polymer matrix, indicative of differing oxidation states arising from the distinct electropolymerization methodologies employed. In the optical characterization, differences in the energy band gap (Eg) values and in the positions of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) were observed, attributable to variations in doping levels and structural irregularities introduced during the electropolymerization procedures. To assess the charge transfer properties of the PANI photoelectrodes, electrochemical impedance spectroscopy (EIS) experiments were carried out in a 0.1 M sodium sulfate (Na₂SO₄) electrolyte. The results showed a substantial decrease in charge transfer resistance with the PANI coatings compared to uncoated substrates, with PANI obtained through cyclic voltammetry (CV) presenting the lowest charge transfer resistance, in contrast to PANI obtained via chronoamperometry (CA) and potential pulses (PP). Subsequently, the photoactive response of the PANI photoelectrodes was measured through linear sweep voltammetry (LSV) and chronoamperometry. The photoelectrochemical measurements revealed a discernible photoactivity in all PANI-coated electrodes. However, PANI electropolymerized through CV displayed the highest photocurrent.
Interestingly, PANI derived from chronoamperometry (CA) exhibited the most stable photocurrent over an extended period.
Keywords: PANI, photocurrent, photoresponse, charge separation, recombination
Procedia PDF Downloads 66
2787 Extension of Positive Linear Operator
Authors: Manal Azzidani
Abstract:
This research considers the extension of special functions called positive linear operators. A bounded linear operator defined from a normed space to a Banach space is extended to the closure of its domain, and a linear functional identified on a vector subspace is extended by the Hahn-Banach theorem, which can be generalized to positive linear operators.
Keywords: extension, positive operator, Riesz space, sublinear function
Procedia PDF Downloads 518
2786 Frequency Modulation in Vibro-Acoustic Modulation Method
Authors: D. Liu, D. M. Donskoy
Abstract:
The vibro-acoustic modulation method is based on the modulation of a high-frequency ultrasonic wave (carrier) by low-frequency vibration in the presence of various defects, primarily contact-type defects such as cracks, delamination, etc. The presence and severity of the defect are measured by the ratio of the spectral sidebands to the carrier in the spectrum of the modulated signal. This approach, however, does not differentiate between amplitude and frequency modulation, AM and FM, respectively. It was experimentally shown that both modulations can be present in the spectrum, yet each may be associated with different physical mechanisms. AM mechanisms are quite well understood and widely covered in the literature. This paper is a first attempt to explain the generation mechanisms of FM and its correlation with flaw properties. Here we propose two possible mechanisms leading to FM, based on nonlinear local defect resonance and dynamic acousto-elastic models.
Keywords: non-destructive testing, nonlinear acoustics, structural health monitoring, acousto-elasticity, local defect resonance
Procedia PDF Downloads 156
2785 A Combined Fiber-Optic Surface Plasmon Resonance and Ta2O5: rGO Nanocomposite Synergistic Scheme for Trace Detection of Insecticide Fenitrothion
Authors: Ravi Kant, Banshi D. Gupta
Abstract:
The unbridled application of insecticides to enhance agricultural yield has become a matter of grave concern to both the environment and human health and thus poses a potential threat to sustainable development. Fenitrothion is an extensively used organophosphate insecticide whose residues are reported to be extremely toxic to birds, humans, and aquatic life. A sensitive, swift, and accurate detection protocol for fenitrothion is thus highly demanded. In this work, we report an SPR-based fiber-optic sensor for the detection of fenitrothion, where a nanocomposite arrangement of Ta₂O₅ and reduced graphene oxide (rGO) (Ta₂O₅:rGO), decorated on the silver-coated unclad core region of an optical fiber, forms the sensing channel. A nanocomposite arrangement synergistically integrates the properties of the involved components and consequently furnishes a conducive framework for sensing applications. The modification of the dielectric function of the sensing layer on exposure to fenitrothion solutions of diverse concentrations forms the sensing mechanism. This modification is reflected in the shift of the resonance wavelength. Experimental variables such as the concentration of rGO in the nanocomposite configuration, the dip time of the silver-coated fiber-optic probe for deposition of the sensing layer, and the influence of pH on the performance of the sensor have been optimized to extract the best performance of the sensor. SPR studies on the optimized sensing probe reveal the high sensitivity, wide operating range, and good reproducibility of the fabricated sensor, which unveil the promising utility of the Ta₂O₅:rGO nanocomposite framework for developing an efficient detection methodology for fenitrothion.
The fiber-optic SPR (FOSPR) approach, in cooperation with nanomaterials, positions the present work as a beneficial approach for fenitrothion detection, imparting numerous useful advantages such as sensitivity, selectivity, compactness, and cost-effectiveness.
Keywords: surface plasmon resonance, optical fiber, sensor, fenitrothion
Procedia PDF Downloads 210
2784 Acute Superior Mesenteric Artery Thrombosis Leading to Pneumatosis Intestinalis and Portal Venous Gas in a Young Adult after COVID-19 Vaccination
Authors: Prakash Dhakal
Abstract:
Hepatic portal venous gas (HPVG) is diagnosed via computed tomography owing to its unusual imaging features. HPVG, when linked with pneumatosis intestinalis, has a high mortality rate and requires urgent intervention. We present the case of a 26-year-old young adult with superior mesenteric artery thrombosis who presented with severe abdominal pain. He had received the first dose of a COVID-19 vaccine (COVISHIELD) 15 days earlier. On imaging, HPVG and pneumatosis intestinalis were seen, prompting urgent intervention. Reliable interpretation of the imaging findings, along with quick intervention, led to a favorable outcome in our case. Herein we present a thorough review of a patient with a history of COVID-19 vaccination and superior mesenteric artery thrombosis leading to bowel ischemia and hepatic portal venous gas. The patient underwent subtotal small bowel resection.
Keywords: COVID-19 vaccination, SMA thrombosis, portal venous gas, pneumatosis intestinalis
Procedia PDF Downloads 91
2783 Defuzzification of Periodic Membership Function on Circular Coordinates
Authors: Takashi Mitsuishi, Koji Saigusa
Abstract:
This paper presents a circular polar coordinate transformation of periodic fuzzy membership functions. The purpose is identification of the domain of periodic membership functions in the consequent part of IF-THEN rules. The proposed methods are applied to a simple color construct system.
Keywords: periodic membership function, polar coordinates transformation, defuzzification, circular coordinates
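One common way to defuzzify a periodic membership function via circular coordinates is to map each angle to the unit circle, weight it by its membership grade, and take the direction of the mean vector; a linear centroid fails when the function wraps across 0°. A hedged sketch follows (the paper's exact transformation may differ; angles and grades are illustrative):

```python
import math

def circular_centroid(thetas, mu):
    """Defuzzify a periodic membership function: weight each angle's unit
    vector by its membership grade and return the mean-vector direction."""
    s = sum(m * math.sin(t) for t, m in zip(thetas, mu))
    c = sum(m * math.cos(t) for t, m in zip(thetas, mu))
    return math.atan2(s, c) % (2.0 * math.pi)

# Membership peaked between 0 and 10 degrees, wrapping across 0: a linear
# (non-circular) centroid of [350, 0, 10, 20] would land far from the peak.
deg = [350, 0, 10, 20]
mu = [0.5, 1.0, 1.0, 0.5]
thetas = [math.radians(d) for d in deg]
result = math.degrees(circular_centroid(thetas, mu))
print(round(result, 1))  # → 5.0
```

The circular form matters for quantities such as hue in a color system, where 350° and 10° are near neighbors.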
Procedia PDF Downloads 312
2782 Module Valuations and Quasi-Valuations
Authors: Shai Sarussi
Abstract:
Suppose F is a field with valuation v and valuation domain Oᵥ, and R is an Oᵥ-algebra. It is known that there exists a filter quasi-valuation on R; the existence of a quasi-valuation yields several important connections between Oᵥ and R, in particular with respect to their prime spectra. In this paper, the notion of a module valuation is introduced. It is shown that any torsion-free module over Oᵥ has an induced module valuation. Moreover, several results connecting the filter quasi-valuation and module valuations are presented.
Keywords: valuations, quasi-valuations, prime spectrum, algebras over valuation domains
Procedia PDF Downloads 226
2781 Examining Private Law's Role in Promoting Human Rights: Prospects, Obstacles, and Safeguarding Challenges
Authors: Laura Cami Vorpsi
Abstract:
This research paper examines the potential of private law as a means to promote and safeguard human rights while also addressing the associated challenges and limitations of adopting such an approach. Historically, private law mechanisms, namely contract law, tort law, and property law, have been employed to govern and oversee private relationships and transactions. Nevertheless, it is increasingly acknowledged that private law can also assume a significant role in safeguarding and advancing human rights, particularly in circumstances where the safeguards provided by public law are insufficient or inaccessible. This study assesses the benefits associated with the utilization of private law as a complementary measure to public law safeguards. These advantages encompass enhanced efficacy and efficiency of remedies, as well as the capacity to customize solutions to suit the unique requirements and circumstances of individuals. Nevertheless, the present study also considers the constraints associated with private law mechanisms, such as the financial and procedural intricacies of legal proceedings, the possibility of imbalanced negotiation power, and the potential to worsen pre-existing disparities and systemic inequities. The paper posits that the adoption of a private law-based approach to human rights necessitates a meticulous design and implementation process in order to mitigate potential risks and optimize the advantages. In conclusion, this study examines the ramifications of these discoveries on policy and practice, highlighting the necessity for heightened awareness and education regarding the capacity of private law to advance and safeguard human rights. Additionally, it underscores the significance of establishing efficient and easily accessible mechanisms for upholding human rights within the private domain. 
The paper concludes by providing recommendations for future research in this domain, specifically emphasizing the necessity for additional empirical investigations to assess the efficacy and consequences of private law-oriented strategies in safeguarding human rights.
Keywords: private law, human rights, promoting, protecting, access to justice
Procedia PDF Downloads 78
2780 Investigation of Delivery of Triple Play Services
Authors: Paramjit Mahey, Monica Sharma, Jasbinder Singh
Abstract:
Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.
Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
Procedia PDF Downloads 542
2779 The Lasting Impact of Parental Conflict on Self-Differentiation of Young Adult Offspring
Authors: A. Benedetto, P. Wong, N. Papouchis, L. W. Samstag
Abstract:
Bowen’s concept of self-differentiation describes a healthy balance of autonomy and intimacy in close relationships, and it has been widely researched in the context of family dynamics. The current study aimed to clarify the impact of family dysfunction on self-differentiation by specifically examining conflict between parents and by including young adults, an under-examined age group in this domain (N = 300; ages 18 to 30). It also identified a protective factor for offspring from conflictual homes. The 300 young adults (recruited online through Mechanical Turk) completed the Differentiation of Self Inventory (DSI), the Children’s Perception of Interparental Conflict Scale (CPIC), the Parental Bonding Instrument (PBI), and the Symptom Checklist-90-Revised (SCL-90-R). Analyses revealed that interparental conflict significantly impairs self-differentiation among young adult offspring. Specifically, exposure to parental conflict showed a negative impact on young adults’ sense of self, emotional reactivity, and interpersonal cutoff in the context of close relationships. Parental conflict was also related to increased psychological distress among offspring. Surprisingly, the study found that parental divorce does not impair self-differentiation in offspring, demonstrating the distinctly harmful impact of conflict. This study clarifies a unique type of family dysfunction that impairs self-differentiation, specifically by distinguishing it from parental divorce; it examines young adults, a critical age group not previously examined in this domain; and it identifies a moderating protective factor (a strong parent-child bond) for offspring exposed to conflict. Overall, the results suggest the need for modifications in parental behavior in order to protect offspring at risk of lasting emotional and interpersonal damage.
Keywords: divorce, family dysfunction, parental conflict, parent-child bond, relationships, self-differentiation, young adults
Procedia PDF Downloads 158
2778 Broadband Platinum Disulfide Based Saturable Absorber Used for Optical Fiber Mode Locking Lasers
Authors: Hui Long, Chun Yin Tang, Ping Kwong Cheng, Xin Yu Wang, Wayesh Qarony, Yuen Hong Tsang
Abstract:
Two-dimensional (2D) materials have recently attracted substantial research interest since the discovery of graphene. However, the zero-bandgap feature of graphene limits its use in nonlinear optical applications such as saturable absorption, which require strong light-matter interaction. Nevertheless, the excellent optoelectronic properties of Group 10 transition metal dichalcogenide 2D materials, e.g., PtS2, such as a broadly tunable bandgap energy and high carrier mobility, introduce new degrees of freedom in optoelectronic applications. This work reports our recent research findings regarding the saturable absorption of layered 2D PtS2 material and its suitability as a saturable absorber (SA) for ultrafast mode-locked fiber lasers. The demonstration of mode-locking operation using the fabricated PtS2 SA will be discussed. The PtS2/PVA SA used in this experiment is made up of few-layered PtS2 nanosheets fabricated via simple ultrasonic liquid exfoliation. An operational wavelength located at ~1 micron is demonstrated from a Yb-doped mode-locked fiber laser ring cavity using the PtS2 SA. The fabricated PtS2 saturable absorber offers strong nonlinear properties, and it is capable of producing regular mode-locked laser pulses with a pulse-to-pulse spacing matching the round-trip cavity time. The results confirm successful mode-locking operation achieved by the fabricated PtS2 material. This work opens new opportunities for PtS2 materials in ultrafast laser generation. Acknowledgments: This work is financially supported by Shenzhen Science and Technology Innovation Commission (JCYJ20170303160136888) and the Research Grants Council of Hong Kong, China (GRF 152109/16E, PolyU code: B-Q52T).
Keywords: platinum disulfide, PtS2, saturable absorption, saturable absorber, mode locking laser
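The statement that the pulse-to-pulse spacing matches the cavity round-trip time corresponds to a simple relation: for a ring cavity, the fundamental repetition rate is f = c/(nL). The numbers below are illustrative, not the cavity reported here.

```python
# Fundamental mode-locking: the pulse period equals the cavity round-trip
# time T = n * L / c, so the repetition rate is f = c / (n * L).
# Illustrative values only (not the cavity described in the abstract).
c = 2.998e8          # speed of light in vacuum, m/s
n = 1.45             # effective refractive index of silica fiber
L = 20.0             # ring-cavity length, m

T = n * L / c                 # round-trip time, s
f_rep = 1.0 / T               # fundamental repetition rate, Hz
print(round(f_rep / 1e6, 2))  # repetition rate in MHz → 10.34
```

Observing a pulse train at exactly this rate is one standard check that a laser is fundamentally mode-locked rather than Q-switched.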
Procedia PDF Downloads 190
2777 Performance Analysis of Artificial Neural Network Based Land Cover Classification
Authors: Najam Aziz, Nasru Minallah, Ahmad Junaid, Kashaf Gul
Abstract:
Land cover classification using automated classification techniques on remotely sensed multi-spectral imagery is one of the promising areas of research. Different land conditions at different times are captured by satellite and monitored by applying different classification algorithms in a specific environment. In this paper, a SPOT-5 image provided by SUPARCO was studied and classified in ENVI, a software tool widely used in remote sensing. Then, an Artificial Neural Network (ANN) classification technique was used to detect the land cover changes in Abbottabad district. The obtained results were compared with a pixel-based Mahalanobis distance classifier. The results show that the ANN gives a better overall accuracy of 99.20% and a Kappa coefficient of 0.98 compared with the Mahalanobis distance classifier.
Keywords: landcover classification, artificial neural network, remote sensing, SPOT 5
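The Kappa coefficient reported above measures chance-corrected agreement between the classified map and the reference data; it can be computed directly from a confusion matrix. A minimal sketch with a toy 2-class matrix (illustrative values, not the paper's data):

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a confusion matrix: observed accuracy corrected
    for the agreement expected by chance from the row/column marginals."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    po = np.trace(confusion) / total                               # observed
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / total ** 2  # chance
    return (po - pe) / (1.0 - pe)

# Toy 2-class confusion matrix (rows: reference classes, cols: predicted)
cm = [[45, 5],
      [5, 45]]
print(round(kappa(cm), 2))  # → 0.8
```

A kappa of 0.98, as reported for the ANN classifier, indicates near-perfect agreement even after discounting chance.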
Procedia PDF Downloads 549
2776 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process, using the MS MARCO dataset trained on 500K queries, to extract the most relevant text passage and shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query: where will the next Olympics be held? The gold answer for this query as given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games in Tokyo, this is absolutely correct. But if the same question is asked in 2022, then the answer is "Paris, 2024".
Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually passed to human evaluators for further validation, which is expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset has been used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
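The timestamp-aware matching idea described in this abstract can be sketched as follows. This is only an illustration, not the authors' actual metric: the `time_aware_match` helper, the validity-window annotation of gold answers, and the Olympics timeline below are all hypothetical.

```python
from datetime import date

def time_aware_match(predictions, gold_timeline, today):
    """Return True if any top-n prediction matches the gold answer
    that is valid at `today`. `gold_timeline` maps a (start, end)
    validity window to the answer string valid in that window."""
    current_gold = None
    for (start, end), answer in gold_timeline.items():
        if start <= today <= end:
            current_gold = answer.strip().lower()
            break
    if current_gold is None:
        return False  # no gold answer annotated for this date
    return any(p.strip().lower() == current_gold for p in predictions)

# Hypothetical timeline for "Where will the next Olympics be held?"
timeline = {
    (date(2013, 9, 7), date(2021, 8, 8)): "Tokyo",
    (date(2021, 8, 9), date(2024, 8, 11)): "Paris",
}

print(time_aware_match(["Paris", "Tokyo"], timeline, date(2022, 6, 1)))  # → True
```

The same top-n predictions are scored differently depending on the query timestamp, which is exactly the behavior a static exact-match metric cannot express.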
Procedia PDF Downloads 103
2775 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick's law of diffusion and the MacInnes and Ohm's equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It combines the implicit and explicit Euler methods, with the advantage of being second-order accurate in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system identification or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple explicit Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include long-term simulations for aging tests as well as short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
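The time-step-independent stability claimed for Crank-Nicolson can be illustrated on a 1-D diffusion equation of the electrolyte-transport type, u_t = D·u_xx with zero-value boundaries. The grid, parameters, and the Thomas tridiagonal solver below are illustrative assumptions, not the authors' implementation:

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal, d: rhs)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, D, dx, dt):
    """One Crank-Nicolson step for u_t = D*u_xx with u = 0 at both ends:
    solve (I - r*A) u^{n+1} = (I + r*A) u^n, r = D*dt/(2*dx^2)."""
    n = len(u)
    r = D * dt / (2.0 * dx * dx)
    rhs = [0.0] * n
    for i in range(1, n - 1):  # (I + r*A) u^n at interior nodes
        rhs[i] = r * u[i - 1] + (1.0 - 2.0 * r) * u[i] + r * u[i + 1]
    a = [-r] * n; b = [1.0 + 2.0 * r] * n; c = [-r] * n
    a[0] = c[0] = a[-1] = c[-1] = 0.0   # Dirichlet boundary rows
    b[0] = b[-1] = 1.0
    rhs[0] = rhs[-1] = 0.0
    return thomas(a, b, c, rhs)

# Demo: one step on a sine profile (illustrative grid and parameters)
n = 21
dx = 1.0 / (n - 1)
u0 = [math.sin(math.pi * i * dx) for i in range(n)]
u1 = crank_nicolson_step(u0, 1.0, dx, 0.01)
```

The explicit Euler scheme is stable only for r ≤ 1/2; here even a huge dt leaves the solution bounded, which is the practical reason the abstract favors Crank-Nicolson for long-term aging simulations.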
Procedia PDF Downloads 25
2774 Enhancing Model Interoperability and Reuse by Designing and Developing a Unified Metamodel Standard
Authors: Arash Gharibi
Abstract:
Mankind has always used models to solve problems. Essentially, models are simplified versions of reality, and the need for them stems from having to deal with complexity; many processes or phenomena are too complex to be described completely. Thus, a fundamental requirement is that a model contain the characteristic features that are essential in the context of the problem to be solved or described. Models are used in virtually every scientific domain to deal with various problems, and during recent decades their number has increased exponentially. Publication of models as part of original research has traditionally been in scientific periodicals, series, monographs, agency reports, national journals, and laboratory reports. This makes it difficult for interested groups and communities to stay informed about the state of the art. During the modeling process, many important decisions are made which impact the final form of the model. Without a record of these considerations, the final model remains ill-defined and open to varying interpretations. Unfortunately, the details of these considerations are often lost, and where information about a model does exist, it is likely to be written intuitively, in different layouts and in different degrees of detail. To overcome these issues, different domains have attempted to implement their own approaches to preserving their models' information in the form of model documentation. The most frequently cited model documentation approaches, however, are domain-specific, are not applicable to existing models, and do not allow evolutionary flexibility or intrinsic corrections and improvements. These issues all stem from the lack of unified standards for model documentation. As a way forward, this research proposes a new standard for capturing and managing models' information in a unified way, so that interoperability and reusability of models become possible.
This standard will also be evolutionary, meaning that members of the modeling community can contribute to its ongoing development and improvement. In this paper, three of the most common current metamodels are reviewed, and based on the pros and cons of each, a new metamodel is proposed.Keywords: metamodel, modeling, interoperability, reuse
Procedia PDF Downloads 199
2773 Post-Earthquake Road Damage Detection by SVM Classification from Quickbird Satellite Images
Authors: Moein Izadi, Ali Mohammadzadeh
Abstract:
Detection of damaged parts of roads after an earthquake is essential for coordinating rescuers. In this study, an approach is presented for the semi-automatic detection of damaged roads in a city using pre-event vector maps together with pre- and post-earthquake QuickBird satellite images. Damage is defined in this study as the debris of damaged buildings adjacent to the roads. Spectral and texture features are considered in an SVM classification step to detect damage. Finally, the proposed method is tested on QuickBird pan-sharpened images from the Bam earthquake, and the results show that an overall accuracy of 81% and a kappa coefficient of 0.71 are achieved for the damage detection. The obtained results indicate the efficiency and accuracy of the proposed approach.Keywords: SVM classifier, disaster management, road damage detection, QuickBird images
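The SVM classification step can be sketched in a simplified form. This is not the authors' classifier: below is a dependency-free linear soft-margin SVM trained by sub-gradient descent, applied to hypothetical two-dimensional [mean brightness, texture variance] features standing in for the spectral and texture features mentioned above.

```python
def train_linear_svm(X, y, epochs=1000, lr=0.01, C=1.0):
    """Minimal linear soft-margin SVM via sub-gradient descent.
    X: list of feature vectors; y: labels in {-1, +1}."""
    dim, n = len(X[0]), len(X)
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # inside the margin: hinge-loss sub-gradient
                w = [wj + lr * (C * yi * xj - wj / n) for wj, xj in zip(w, xi)]
                b += lr * C * yi
            else:           # outside: only the regularisation term acts
                w = [wj * (1.0 - lr / n) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical feature vectors: [mean brightness, texture variance]
X = [[0.20, 0.10], [0.25, 0.15], [0.80, 0.85], [0.90, 0.75]]
y = [-1, -1, 1, 1]   # -1 = intact road segment, +1 = debris-covered segment
w, b = train_linear_svm(X, y)
```

A production pipeline over QuickBird imagery would instead use a kernel SVM over many spectral/texture bands, but the margin-maximizing decision rule is the same.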
Procedia PDF Downloads 625
2772 [Keynote Talk]: Heavy Metals in Marine Sediments of Gulf of Izmir
Authors: E. Kam, Z. U. Yümün, D. Kurt
Abstract:
In this study, sediment samples were collected from four sampling sites located on the shores of the Gulf of İzmir. In the samples, Cd, Co, Cr, Cu, Mn, Ni, Pb, and Zn concentrations were determined using inductively coupled plasma optical emission spectrometry (ICP-OES). The average heavy metal concentrations were: Cd < LOD (limit of detection); Co 14.145 ± 0.13 μg g−1; Cr 112.868 ± 0.89 μg g−1; Cu 34.045 ± 0.53 μg g−1; Mn 481.43 ± 7.65 μg g−1; Ni 76.538 ± 3.81 μg g−1; Pb 11.059 ± 0.53 μg g−1; and Zn 140.133 ± 1.37 μg g−1. The results were compared with the average abundances of these elements in the Earth's crust. The measured heavy metal concentrations can serve as reference values for further studies carried out on the shores of the Aegean Sea.Keywords: heavy metal, Aegean Sea, ICP-OES, sediment
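The comparison with crustal abundances can be sketched as a simple concentration-ratio calculation. The crustal values below are approximate literature figures inserted for illustration; they are not necessarily the reference values the authors used.

```python
# Measured mean concentrations from the study (μg/g)
measured = {"Co": 14.145, "Cr": 112.868, "Cu": 34.045, "Mn": 481.43,
            "Ni": 76.538, "Pb": 11.059, "Zn": 140.133}
# Approximate average crustal abundances (μg/g) -- illustrative assumption
crust = {"Co": 25, "Cr": 100, "Cu": 55, "Mn": 950, "Ni": 75, "Pb": 12.5, "Zn": 70}

# Ratio > 1 suggests enrichment relative to the average crust
ratios = {el: measured[el] / crust[el] for el in measured}
enriched = sorted((el for el, r in ratios.items() if r > 1),
                  key=lambda el: ratios[el], reverse=True)
print(enriched)  # → ['Zn', 'Cr', 'Ni']
```

A fuller assessment would normalize by a conservative element (e.g. Al or Fe) to compute enrichment factors, but the ratio already flags which metals sit above their crustal background.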
Procedia PDF Downloads 185
2771 Characterization of Leakage Current on the Surface of Porcelain Insulator under Contaminated Conditions
Authors: Hocine Terrab, Abdelhafid Bayadi, Adel Kara, Ayman El-Hag
Abstract:
Insulator flashover under polluted conditions has been a serious threat to the reliability of power systems. It is known that the flashover process is mainly affected by environmental conditions such as the pollution level and humidity, which are the essential parameters influencing the wetting process. This paper presents an investigation of the characteristics of the leakage current (LC) developed on the surface of a porcelain insulator under contaminated conditions and AC voltage. The study is carried out in an artificial fog chamber, and the LC is characterized for different stages: dry, wetted, and with discharge activity present. Time-frequency and spectral analyses are adopted to calculate the evolution of the LC characteristics across the various stages prior to flashover occurrence. The preliminary results could be used in analysing the LC to develop a more effective diagnosis of early signs of dry-band arcing as an indication that insulator washing is needed.Keywords: flashover, harmonic components, leakage current, phase angle, statistical analysis
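The harmonic-component analysis mentioned in the keywords can be sketched by correlating the leakage-current waveform against individual harmonic frequencies (a single-frequency DFT). The synthetic 50 Hz waveform and the harmonic amplitudes below are illustrative assumptions, not measured data:

```python
import math

def harmonic_amplitude(signal, fs, f):
    """Amplitude of `signal` at frequency f via single-frequency
    DFT correlation (exact for an integer number of cycles)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

# Synthetic LC: 50 Hz fundamental plus odd harmonics from dry-band arcing (assumed)
fs = 5000                      # sampling rate, Hz
t = [i / fs for i in range(fs)]  # 1 s window
lc = [1.0 * math.sin(2 * math.pi * 50 * x)
      + 0.3 * math.sin(2 * math.pi * 150 * x)
      + 0.1 * math.sin(2 * math.pi * 250 * x) for x in t]

h1 = harmonic_amplitude(lc, fs, 50)
h3 = harmonic_amplitude(lc, fs, 150)
h5 = harmonic_amplitude(lc, fs, 250)
print(round(h3 / h1, 3))  # odd-harmonic ratio; it rises as discharge activity develops
```

In diagnosis schemes of this kind, the growth of the odd-harmonic-to-fundamental ratio over the dry/wetted/discharge stages serves as the early-warning indicator.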
Procedia PDF Downloads 436
2770 Study of the Morphological and Optical Properties of Nanometric NiO
Authors: Nassima Hamzaoui, Mostefa Ghamnia
Abstract:
Nanoscale thin films of pure and Mn-doped nickel oxide (NiO) were prepared by dissolving nickel chloride hexahydrate (NiCl2·6H2O) and manganese chloride tetrahydrate (MnCl2·4H2O) under controlled experimental conditions. The resulting solution was stirred at room temperature for 30 minutes in order to obtain homogeneity, and was then sprayed onto heated glass substrates. The films obtained were characterized by X-ray diffraction to verify their crystallinity. Atomic force microscopy (AFM) reveals the surface topography, and UV-visible spectroscopy shows good transparency of the NiO layers.Keywords: films, NiO, AFM, X-ray diffraction
Procedia PDF Downloads 64
2769 A Green Method for Selective Spectrophotometric Determination of Hafnium(IV) with Aqueous Extract of Ficus carica Tree Leaves
Authors: A. Boveiri Monji, H. Yousefnia, M. Haji Hosseini, S. Zolghadri
Abstract:
A clean spectrophotometric method for the determination of hafnium using a green reagent, an acidic extract of Ficus carica tree leaves, is developed. In 6 M hydrochloric acid, hafnium reacts with this reagent to form a yellow product. The product shows maximum absorbance at 421 nm, with a molar absorptivity of 0.28 × 10⁴ l mol⁻¹ cm⁻¹, and the method is linear over the 2-11 µg ml⁻¹ concentration range. The detection limit was found to be 0.312 µg ml⁻¹. Except for zirconium and iron, the selectivity was good: most ions did not show any significant spectral interference at concentrations up to several hundred times that of hafnium. The proposed method is green, simple, low cost, and selective.Keywords: spectrophotometric determination, Ficus carica tree leaves, synthetic reagents, hafnium
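A calibration curve and detection limit of the kind reported can be sketched with an ordinary least-squares fit and the common LOD = 3.3·σ_blank/slope rule. The absorbance readings and the blank standard deviation below are synthetic values chosen for illustration, not the authors' data:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Hypothetical calibration points inside the reported linear range (2-11 μg/ml)
conc = [2, 4, 6, 8, 10]                            # μg/ml
absorb = [0.062, 0.124, 0.186, 0.248, 0.310]       # synthetic, perfectly linear

slope, intercept = linear_fit(conc, absorb)
sigma_blank = 0.003          # assumed std. dev. of blank absorbance
lod = 3.3 * sigma_blank / slope
print(round(lod, 3))         # μg/ml, close to the reported 0.312
```

With these assumed numbers the rule lands near the paper's 0.312 µg ml⁻¹ figure, which is the point of the exercise: the LOD follows directly from the calibration slope and blank noise.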
Procedia PDF Downloads 212
2768 Understanding the Information in Principal Component Analysis of Raman Spectroscopic Data during Healing of Subcritical Calvarial Defects
Authors: Rafay Ahmed, Condon Lau
Abstract:
Bone healing is a complex and sequential process involving changes at the molecular level. Raman spectroscopy is a promising technique for studying bone mineral and matrix environments simultaneously. In this study, subcritical calvarial defects are used to study bone composition during healing without disturbing the fracture. The model allows the natural healing of bone to be monitored while avoiding mechanical harm to the callus. Calvarial defects were created using a 1 mm burr drill in the parietal bones of Sprague-Dawley rats (n=8) and served as in vivo defects. After 7 days, the animals were euthanized and their skulls harvested. One additional defect per sample was created on the opposite parietal bone, using the same procedure, to serve as a control defect. Raman spectroscopy (785 nm) was used to investigate bone parameters on three different skull surfaces: in vivo defects, control defects, and the normal surface. Principal component analysis (PCA) was utilized for the analysis and interpretation of the Raman spectra and helped in the classification of groups. PCA was able to distinguish the in vivo defects from the normal surface and control defects. PC1 shows the major variation at 958 cm⁻¹, which corresponds to the ʋ1 phosphate mineral band. PC2 shows the major variation at 1448 cm⁻¹, which is the characteristic band of CH2 deformation and corresponds to collagens. The Raman parameters, namely the mineral-to-matrix ratio and crystallinity, were found to be significantly decreased in the in vivo defects compared to the normal surface and controls. Scanning electron microscopy and optical microscopy images show the formation of newly generated matrix in the form of bony bridges of collagen. An optical profiler shows that surface roughness increased by 30% from the controls to the in vivo defects after 7 days. These results agree with the Raman assessment parameters and confirm new collagen formation during healing.Keywords: Raman spectroscopy, principal component analysis, calvarial defects, tissue characterization
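The PCA step described above can be sketched as an eigendecomposition of the spectra's covariance matrix. The synthetic "spectra" below are an assumption: band index 40 stands in for a discriminating band such as the ~958 cm⁻¹ phosphate peak, and the group sizes are arbitrary.

```python
import numpy as np

def pca(spectra, n_components=2):
    """PCA via eigendecomposition of the covariance matrix.
    spectra: (n_samples, n_bands) array; returns (scores, components)."""
    X = spectra - spectra.mean(axis=0)          # mean-center each band
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(evals)[::-1]             # largest variance first
    components = evecs[:, order[:n_components]]
    scores = X @ components
    return scores, components

# Synthetic "spectra": two groups differing mainly in one band
rng = np.random.default_rng(0)
spectra = rng.normal(0.0, 0.01, size=(10, 100))
spectra[:5, 40] += 1.0   # hypothetical mineral band stronger in one group
scores, comps = pca(spectra)
```

Because nearly all variance lies in the group-splitting band, PC1 aligns with it and the PC1 scores separate the two groups, which is the same mechanism by which PC1 in the study picks out the 958 cm⁻¹ mineral band.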
Procedia PDF Downloads 224
2767 Text as Reader Device Improving Subjectivity on the Role of Attestation between Interpretative Semiotics and Discursive Linguistics
Authors: Marco Castagna
Abstract:
The proposed paper aims to inquire into the relation between text and reader, focusing on the concept of 'attestation'. Indeed, despite being widely accepted in semiotic research, even today the concept of text remains uncertainly defined. It seems undeniable that what is called 'text' offers an image of internal cohesion and coherence that makes it possible to analyze it as an object. Nevertheless, this same object becomes problematic when it is pragmatically activated by the act of reading. In fact, like the T.A.R.D.I.S., the unique space-time vehicle used by the well-known BBC character Doctor Who in his adventures, every text appears to its readers not only "bigger inside than outside", but also as offering spaces that change according to the different traveller standing in them. In a few words, this singular condition raises questions about the gnosiological relation between text and reader. How can a text be considered the 'same', even if it can be read in different ways by different subjects? How can readers be provided in advance with the knowledge required for 'understanding' a text, while at the same time learning something more from it? In order to explain this singular condition, it seems useful to start thinking about the text as a device rather than an object. In other words, this unique status becomes more clearly understandable when 'text' ceases to be considered as a box designed to move meaning from a sender to a recipient (marking the semiotic priority of the "code") and starts to be recognized as a performative meaning hypothesis, discursively configured by one or more forms and empirically perceivable by means of one or more substances. Thus, a text appears as a "semantic hanger", potentially offered to the "unending deferral of the interpretant", and from time to time fixed as an "instance of Discourse".
In this perspective, every reading can be considered as an answer to the continuous request for confirming or denying the meaning configuration (the meaning hypothesis) expressed by the text. Finally, 'attestation' is exactly what regulates this dynamic of request and answer, through which the reader is able to confirm his previous hypotheses on reality or maybe acquire some new ones.Keywords: attestation, meaning, reader, text
Procedia PDF Downloads 237
2766 Mechanical Properties and Characterization of Ti–6Al–4V Alloy Diffused by Molybdenum
Authors: Alaeddine Kaouka
Abstract:
The properties and characterization of Ti-6Al-4V alloys with different Mo contents were investigated, with particular attention to microstructure and hardness. The alloy structure was characterized by X-ray diffraction, SEM, and optical microscopy. The results showed that the addition of Mo stabilized the β-phase in the solution-treated condition. The Mo added to the titanium alloy changes the lattice parameters of the phases. Microstructural observations indicate an obvious reduction in the prior grain size. The hardness increased with increasing β-phase stability, while Young's modulus and ductility decreased.Keywords: characterization, mechanical properties, molybdenum, titanium alloy
Procedia PDF Downloads 263
2765 Identification of Text Domains and Register Variation through the Analysis of Lexical Distribution in a Bangla Mass Media Text Corpus
Authors: Mahul Bhattacharyya, Niladri Sekhar Dash
Abstract:
The present research paper is an experimental attempt to investigate the nature of register variation in three major text domains, namely social, cultural, and political texts, collected from a corpus of Bangla printed mass media texts. The study uses a moderately sized corpus of Bangla mass media text containing nearly one million words, collected from different media sources such as newspapers, magazines, advertisements, and periodicals. The analysis of the corpus data reveals that each text has certain lexical properties that not only control its identity but also mark its uniqueness across the domains. First, the subject domains of the texts are classified according to two parameters, namely 'Genre' and 'Text Type'. Next, some empirical investigations are made to understand how the domains vary from each other in terms of lexical properties, covering both function and content words. Here, the method of comparative-cum-contrastive matching of lexical load across domains is invoked through word frequency counts to track how domain-specific words and terms may serve as decisive indicators in specifying textual contexts and subject domains. The study shows that the common lexical stock that percolates across all text domains is unreliable for this purpose, as its lexicological identity has no bearing on the specification of subject domains. Therefore, it becomes necessary for language users to anchor upon certain domain-specific lexical items to recognize a text as belonging to a specific text domain. The eventual findings of this study confirm that texts belonging to different subject domains in the Bangla news text corpus clearly differ on the parameters of lexical load, lexical choice, lexical clustering, and lexical collocation.
In fact, based on these parameters, along with some statistical calculations, it is possible to classify mass media texts into different types and to mark their relation to the domains to which they actually belong. The advantage of this analysis lies in the proper identification of the linguistic factors at work, which gives language users better insight into the methods they employ in text comprehension, as well as providing a systemic frame for designing text identification strategies for language learners. The availability of a huge amount of Bangla media text data is useful for reaching accurate conclusions with a certain degree of reliability and authenticity. This kind of corpus-based analysis is quite relevant for a resource-poor language like Bangla, as no attempt has ever been made to understand how the structure and texture of Bangla mass media texts vary due to the linguistic and extra-linguistic constraints that actively operate on specific text domains. Since mass media language is assumed to be the most 'recent representation' of actual language use, this study is expected to show how Bangla news texts reflect the thoughts of the society and how they leave a strong impact on the thought processes of the speech community.Keywords: Bangla, corpus, discourse, domains, lexical choice, mass media, register, variation
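The frequency-based matching of lexical load across domains can be sketched with a crude keyness score: the smoothed log-ratio of a word's relative frequency in a target domain versus a reference domain. The toy "political" and "cultural" token lists below are invented for illustration; the study's actual corpus and scoring may differ.

```python
from collections import Counter
import math

def keyness(target_tokens, reference_tokens, top_n=3):
    """Crude keyness: add-one-smoothed log-ratio of relative frequencies.
    High scores mark words characteristic of the target domain."""
    t, r = Counter(target_tokens), Counter(reference_tokens)
    nt, nr = sum(t.values()), sum(r.values())
    scores = {w: math.log(((t[w] + 1) / nt) / ((r[w] + 1) / nr)) for w in t}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Invented miniature "domains" standing in for the Bangla corpus domains
political = "minister election vote parliament vote minister policy".split()
cultural = "festival music dance festival theatre song music".split()

print(keyness(political, cultural))  # → ['minister', 'vote', 'election']
```

Function words shared by both lists score near zero under such a measure, which mirrors the paper's observation that the common lexical stock is useless for domain identification while domain-specific items are decisive.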
Procedia PDF Downloads 175