Search results for: refractive errors
782 The Impact of Ultrasonic Field to Increase the Biodegradability of Leachate from The Landfill
Authors: Kwarciak-Kozlowska A., Slawik-Dembiczak L., Galwa-Widera M.
Abstract:
The complex composition of landfill leachate, which varies over the landfill's operating life, prevents the use of a single universal purification method. Because these wastewaters contain substances that are difficult to biodegrade, their treatment often requires biological methods (activated sludge or anaerobic digestion), frequently supported by physicochemical processes. Currently, more attention is being paid to the development of unconventional treatment methods, among them advanced oxidation processes, including those using ultrasonic waves. It was assumed that ultrasonic waves induce changes in the structure of organic compounds and accelerate the biodegradability of the refractory substances in the leachate, thereby increasing the effectiveness of their treatment in biological processes. We observed a marked increase in the BOD of leachate subjected to ultrasonic treatment: the BOD/COD ratio was 27% higher than the value of this ratio for non-sonicated leachate. It was found that sonication of the leachate clearly influenced the formation and release of aliphatic compounds. These changes suggest a possible disruption of the chemical structure of organic compounds in the leachate, yielding compounds with a chemical structure more susceptible to biodegradation.
Keywords: IR spectra, landfill leachate, organic pollutants, ultrasound
Procedia PDF Downloads 429
781 Collaboration During Planning and Reviewing in Writing: Effects on L2 Writing
Authors: Amal Sellami, Ahlem Ammar
Abstract:
Writing is acknowledged to be a cognitively demanding and complex task. Indeed, the writing process is composed of three iterative sub-processes, namely planning, translating (writing), and reviewing. Not only do second or foreign language learners need to write according to this process, but they also need to respect the norms and rules of language and writing in the text to be produced. Accordingly, researchers have suggested approaching writing as a collaborative task in order to alleviate its complexity. Consequently, collaboration has been implemented either during the whole writing process or only during planning or reviewing. Researchers report that implementing collaboration during the whole process can be demanding in terms of time in comparison to individual writing tasks; because of time constraints, teachers may therefore avoid it. For this reason, it might be pedagogically more realistic to limit collaboration to one of the writing sub-processes (i.e., planning or reviewing). However, previous research implementing collaboration in planning or reviewing is limited and fails to explore the effects of these two conditions on the written text. Consequently, the present study examines the effects of collaboration in planning and collaboration in reviewing on the written text. To reach this objective, quantitative as well as qualitative methods are deployed to examine the written texts holistically and in terms of fluency, complexity, and accuracy. Participants of the study include 4 pairs in each group (n=8). They participated in two experimental conditions: (1) collaborative planning followed by individual writing and individual reviewing, and (2) individual planning followed by individual writing and collaborative reviewing.
The comparative research findings indicate that while collaborative planning resulted in better overall text quality (specifically, better content and organization ratings), better fluency, better complexity, and fewer lexical errors, collaborative reviewing produced better accuracy and fewer syntactical and mechanical errors. The discussion of the findings suggests the need for more comparative research in order to further explore the effects of collaboration in planning or in reviewing. Pedagogical implications of the current study include advising teachers to choose between implementing collaboration in planning or in reviewing depending on their students' needs and what they need to improve.
Keywords: collaboration, writing, collaborative planning, collaborative reviewing
Procedia PDF Downloads 99
780 Microwave Transmission through Metamaterial Based on Permalloy Flakes under Magnetic Resonance and Antiresonance Conditions
Authors: Anatoly B. Rinkevich, Eugeny A. Kuznetsov, Yuri I. Ryabkov
Abstract:
Transmission of electromagnetic waves through a plate of metamaterial based on permalloy flakes, and reflection from the plate, is investigated. The metamaterial is prepared from permalloy flakes, sized from a few to 50 μm, placed in an epoxy-amine matrix. Two series of metamaterial samples are under study, with volume fractions of permalloy particles of 15% and 30%; there is no direct electrical contact between the permalloy particles. Microwave measurements have been carried out at frequencies of 12 to 30 GHz in magnetic fields up to 12 kOe. A sharp decrease of the transmitted wave is observed under the ferromagnetic resonance condition, caused by absorption. Under the magnetic antiresonance condition, by contrast, a maximum of the reflection coefficient is observed at frequencies exceeding 30 GHz. For example, for the metamaterial sample with a permalloy volume fraction of 30%, the variation of the reflection coefficient in a magnetic field reaches 300%. These large variations are of interest for developing magnetic-field-driven microwave devices. Magnetic field variations of the refractive index are also estimated.
Keywords: ferromagnetic resonance, magnetic antiresonance, microwave metamaterials, permalloy flakes, transmission and reflection coefficients
Procedia PDF Downloads 140
779 Leveraging Remote Assessments and Central Raters to Optimize Data Quality in Rare Neurodevelopmental Disorders Clinical Trials
Authors: Pamela Ventola, Laurel Bales, Sara Florczyk
Abstract:
Background: Fully remote or hybrid administration of clinical outcome measures in rare neurodevelopmental disorders trials is increasing due to the ongoing pandemic and the recognition that remote assessments reduce the burden on families. Many assessments in rare neurodevelopmental disorders trials are complex; however, remote/hybrid trials readily allow for the use of centralized raters to administer and score the scales. The use of centralized raters has many benefits, including reducing site burden; however, the specific impact on data quality has not yet been determined. Purpose: The current study has two aims: (a) to evaluate differences in data quality between administrations of a standardized clinical interview completed by centralized raters and those completed by site raters, and (b) to evaluate the improvement in scoring accuracy of standardized developmental assessments when scored centrally rather than by site raters. Methods: For aim 1, the Vineland-3, a widely used measure of adaptive functioning, was administered by site raters (n=52) participating in one of four rare disease trials. The measure was also administered as part of two additional trials that utilized central raters (n=7). Each rater completed a comprehensive training program on the assessment. Following completion of the training, each clinician completed a Vineland-3 with a mock caregiver. Administrations were recorded and reviewed by a neuropsychologist for administration and scoring accuracy. Raters were certified for the trials after demonstrating an accurate administration of the scale. For site raters, 25% of each rater's in-study administrations were reviewed by a neuropsychologist for accuracy of administration and scoring. For central raters, the first two administrations and every 10th administration thereafter were reviewed.
Aim 2 evaluated the added benefit of centralized scoring on the scoring accuracy of the Bayley-3, a comprehensive developmental assessment widely used in rare neurodevelopmental disorders trials. Bayley-3 administrations across four rare disease trials were centrally scored. For all administrations, the site rater who administered the Bayley-3 scored the scale, and a centralized rater reviewed the video recording of the administration and also scored the scale to confirm accuracy. Results: For aim 1, site raters completed 138 Vineland-3 administrations. Of the 138 administrations, 53 were reviewed by a neuropsychologist; four had errors that compromised the validity of the assessment. The central raters completed 180 Vineland-3 administrations; 38 were reviewed, and none had significant errors. For aim 2, 68 administrations of the Bayley-3 were scored by both a site rater and a centralized rater. Of these administrations, 25 had scoring errors that were corrected by the central rater. Conclusion: In rare neurodevelopmental disorders trials, sample sizes are often small, so data quality is critical. The use of central raters inherently decreases site burden, but it also decreases rater variance, as illustrated by the small team of central raters (n=7) needed to conduct all of the assessments (n=180) in these trials compared to the number of site raters (n=52) required for even fewer assessments (n=138). In addition, the use of central raters dramatically improves the quality of scoring of the assessments.
Keywords: neurodevelopmental disorders, clinical trials, rare disease, central raters, remote trials, decentralized trials
Procedia PDF Downloads 172
778 Changing Misconceptions in Heat Transfer: A Problem Based Learning Approach for Engineering Students
Authors: Paola Utreras, Yazmina Olmos, Loreto Sanhueza
Abstract:
This work aims to study and incorporate Problem Based Learning (PBL) for engineering students through the analysis of thermal images of dwellings located at different geographical points in the Región de los Ríos, Chile. The students analyze how heat is transferred into and out of the houses and how heat transfer relates to the climatic conditions affecting each zone. As a result of this activity, students acquire significant learning in the unit on heat and temperature and manage to reverse previous conceptual errors related to energy, temperature, and heat. In addition, students are able to generate prototype solutions to increase thermal efficiency using low-cost materials. Students make their results public in a report following scientific writing standards and at a science fair open to the entire university community. Previous conceptual errors were measured by applying diagnostic tests, before the unit, with everyday questions involving the concepts of heat, temperature, work, and energy. After the unit, the same evaluation is administered so that students themselves can see the evolution in their construction of knowledge. We found that in the initial test, 90% of the students showed deficiencies in the concepts mentioned above, while in the subsequent test 47% showed deficiencies; these percentages differ between students taking the course for the first time and those who had previously taken it in a traditional format. Significant learning was measured by comparing results in subsequent thermodynamics courses between students who received problem-based learning and those who received traditional training.
We have observed that learning becomes meaningful when applied to the daily lives of students, promoting internalization of knowledge and understanding through critical thinking.
Keywords: engineering students, heat flow, problem-based learning, thermal images
Procedia PDF Downloads 231
777 Examining the Development of Complexity, Accuracy and Fluency in L2 Learners' Writing after L2 Instruction
Authors: Khaled Barkaoui
Abstract:
Research on second-language (L2) learning tends to focus on comparing students with different levels of proficiency at one point in time. However, to understand L2 development, more longitudinal research is needed. In this study, we adopt a longitudinal approach to examine changes in three indicators of L2 ability, namely complexity, accuracy, and fluency (CAF), as reflected in the writing of L2 learners on different tasks before and after a period of L2 instruction. Each of 85 Chinese learners of English at three levels of English language proficiency responded to two writing tasks (independent and integrated) before and after nine months of English-language study in China. Each essay (N=276) was analyzed in terms of numerous CAF indices using both computer coding and human rating: number of words written, number of errors per 100 words, ratings of error severity, global syntactic complexity (MLS), complexity by coordination (T/S), complexity by subordination (C/T), clausal complexity (MLC), phrasal complexity (NP density), syntactic variety, lexical density, lexical variation, lexical sophistication, and lexical bundles. Results were then compared statistically across tasks, L2 proficiency levels, and time. Overall, task type had significant effects on fluency, on some syntactic complexity indices (complexity by coordination, structural variety, clausal complexity, phrase complexity), and on lexical density, sophistication, and bundles, but not on accuracy. L2 proficiency had significant effects on fluency, accuracy, and lexical variation, but not on syntactic complexity. Finally, fluency, frequency of errors, syntactic complexity indices (clausal complexity, global complexity, complexity by subordination, phrase complexity, structural variety), and lexical complexity (lexical density, variation, and sophistication), but not accuracy ratings, exhibited significant changes after instruction, particularly for the independent task.
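A few of the indices listed above can be approximated with simple counts. The sketch below is a rough illustration, not the study's actual coding scheme: fluency as total words written, global syntactic complexity as mean length of sentence (a crude stand-in for MLS, which properly requires parsing), and lexical variation as a type-token ratio. The sample text is invented.

```python
import re

def caf_sketch(text):
    """Very rough proxies for three CAF indices: fluency (word count),
    mean length of sentence (MLS), and lexical variation (type-token ratio).
    Real CAF coding uses clause/T-unit parsing, which this sketch omits."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    fluency = len(words)              # number of words written
    mls = fluency / len(sentences)    # global syntactic complexity proxy
    ttr = len(set(words)) / fluency   # lexical variation proxy
    return fluency, mls, ttr

sample = "The cat sat. The cat ran away quickly!"
fluency, mls, ttr = caf_sketch(sample)
```

Type-token ratio is length-sensitive, which is one reason studies such as this one use more robust lexical variation measures; the proxy here is only meant to show the shape of the computation.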
We discuss the findings and their implications for assessment, instruction, and research on CAF in the context of L2 writing.
Keywords: second language writing, fluency, accuracy, complexity, longitudinal
Procedia PDF Downloads 153
776 Polarization of Glass with Positive and Negative Charge Carriers
Authors: Valentina V. Zhurikhina, Mihail I. Petrov, Alexandra A. Rtischeva, Mark Dussauze, Thierry Cardinal, Andrey A. Lipovskii
Abstract:
Polarization of glass, often referred to as thermal poling, is a well-known method of modifying the physical and chemical properties of glass; these modifications manifest themselves in the loss of central symmetry of the medium and in changes to the glass structure and refractive index. The use of poling for second optical harmonic generation and for the fabrication of optical waveguides and electro-optic modulators has also been reported. Nevertheless, a detailed description of the poling of glasses containing multiple charge carriers is still under discussion. In particular, the possible role of electron migration in space-charge formation usually remains out of the question. In this work, we performed a numerical simulation of the thermal poling of a silicate glass containing Na, K, Mg, and Ca, taking into consideration the contribution of electrons to the polarization process. A possible explanation for the migration of electrons is the breaking of non-bridging oxygen bonds. It was found that the modeled depth of the space-charge region is about 10 times greater if the migration of negative charges is taken into consideration. The simulated profiles of the cations participating in the polarization process are in good agreement with experimental data obtained by glow discharge spectroscopy.
Keywords: glass poling, charge transport, modeling, concentration profiles
Procedia PDF Downloads 359
775 Stock Market Prediction by Regression Model with Social Moods
Authors: Masahiro Ohmura, Koh Kakusho, Takeshi Okadome
Abstract:
This paper presents a regression model with autocorrelated errors in which the inputs are social moods obtained by analyzing the adjectives in Twitter posts using a document topic model. The regression model predicts the Dow Jones Industrial Average (DJIA) more precisely than autoregressive moving-average models.
Keywords: stock market prediction, social moods, regression model, DJIA
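One standard way to fit a regression with autocorrelated (AR(1)) errors is the Cochrane-Orcutt procedure: fit ordinary least squares, estimate the AR(1) coefficient from the residuals, refit on quasi-differenced data, and iterate. The sketch below is a minimal pure-Python illustration on synthetic data; the mood scores, coefficients, and noise process are all invented and are not the paper's actual data or estimator.

```python
import random

def ols(x, y):
    """Ordinary least squares for one predictor; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return my - slope * mx, slope

def cochrane_orcutt(x, y, iterations=5):
    """Regression with AR(1) errors: estimate rho from residuals,
    refit on quasi-differenced data, and iterate."""
    a, b = ols(x, y)
    rho = 0.0
    for _ in range(iterations):
        resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        rho = (sum(resid[t] * resid[t - 1] for t in range(1, len(resid)))
               / sum(r * r for r in resid[:-1]))
        xs = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
        ys = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
        a_star, b = ols(xs, ys)
        a = a_star / (1.0 - rho)   # undo the quasi-difference scaling
    return a, b, rho

# Synthetic data: a daily "mood" score drives returns, with AR(1) noise.
random.seed(42)
mood = [random.uniform(-1.0, 1.0) for _ in range(300)]
noise, e = [], 0.0
for _ in range(300):
    e = 0.6 * e + random.gauss(0.0, 0.1)
    noise.append(e)
returns = [0.5 + 2.0 * m + u for m, u in zip(mood, noise)]

intercept, slope, rho = cochrane_orcutt(mood, returns)
```

In practice one would use a statistics library (e.g., a regression routine with ARMA error terms) rather than hand-rolling the fit, but the quasi-differencing step above is the core idea.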
Procedia PDF Downloads 548
774 A Game-Based Methodology to Discriminate Executive Function – a Pilot Study With Institutionalized Elderly People
Authors: Marlene Rosa, Susana Lopes
Abstract:
Few studies have explored the potential of board games as a performance measure, although they can be an interesting strategy in the context of frail populations. In fact, board games are immersive activities that can reduce the pressure of being evaluated. This study aimed to test the ability of game-based strategies to assess executive function in an elderly population. Sixteen older participants were included: 10 with affected executive functions (G1: 85.30±6.00 yrs old; 10 male) and 6 with no clinically important changes in executive function (G2: 76.30±5.19 yrs old; 6 male). Executive function was assessed using the Frontal Assessment Battery (FAB), a quickly applicable cognitive screening test (a score below 12 indicates impairment). The board game used in this study was the TATI Hand Game, designed specifically for training rhythmic coordination of the upper limbs with multiple cognitive stimuli. This game features 1 table grid; 1 set of Single Game cards (played with one hand); Double Game cards (played simultaneously with two hands); 1 die to plan the Single Game mode; cards to plan the Double Game mode; 1 bell; and 2 cups. Each participant played 3 Single Game cards, and the following data were collected: (i) variability in execution time across board game challenges (SD); (ii) number of errors; (iii) execution time (sec). G1 demonstrated higher variability in execution time across challenges (13.0 s vs 0.5 s), a higher number of errors (1.40 vs 0.67), and slower execution (607.80 s vs 281.83 s). These results demonstrate the potential of board games as a functional assessment strategy in geriatric care. Future studies might include larger samples and statistical methodologies to find cut-off values for impairment in executive function during performance in the TATI game.
Keywords: board game, aging, executive function, evaluation
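The three outcome measures above can be computed directly from per-card records. The following is a minimal sketch; the function name, per-card times, and error counts are invented for illustration and are not study data.

```python
def game_metrics(card_times_s, error_counts):
    """Compute the three outcomes described above for one participant:
    variability in execution time across cards (population SD, s),
    total number of errors, and total execution time (s)."""
    n = len(card_times_s)
    mean_t = sum(card_times_s) / n
    sd = (sum((t - mean_t) ** 2 for t in card_times_s) / n) ** 0.5
    return sd, sum(error_counts), sum(card_times_s)

# Hypothetical participant: three Single Game cards.
sd, errors, total_s = game_metrics([200.0, 210.0, 190.0], [1, 0, 1])
```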
Procedia PDF Downloads 142
773 Exploration of RFID in Healthcare: A Data Mining Approach
Authors: Shilpa Balan
Abstract:
Radio Frequency Identification, popularly known as RFID, is used to automatically identify and track tags attached to items. This study focuses on the application of RFID in healthcare. The adoption of RFID in healthcare is crucial for patient safety and inventory management. Data from RFID tags are used to identify the locations of patients and inventory in real time. Medical errors are thought to be a prominent cause of loss of life and injury, and a major advantage of RFID in the healthcare industry is the reduction of medical errors. The healthcare industry has generated huge amounts of data; by discovering patterns and trends within these data, big data analytics can help improve patient care and lower healthcare costs. The growing number of research publications leading to innovations in RFID applications shows the importance of this technology. This study explores the current state of RFID research in healthcare using a text mining approach; no previous study has examined the current state of RFID research in healthcare using a data mining approach. In this study, related articles on RFID were collected from healthcare journals and news articles published between 2000 and 2015. Significant keywords on the topic of focus were identified and analyzed using open-source data analytics software such as RapidMiner. Such analytical tools help extract pertinent information from massive volumes of data. The main benefits of adopting RFID technology in healthcare include tracking medicines and equipment, upholding patient safety, and improving security. The real-time tracking features of RFID allow for enhanced supply chain management. By using big data productively, healthcare organizations can gain significant benefits; big data analytics in healthcare enables improved decisions by extracting insights from large volumes of data.
Keywords: RFID, data mining, data analysis, healthcare
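The keyword identification step described above (performed in the study with tools such as RapidMiner) boils down to counting term frequencies over a corpus after removing stopwords. The sketch below illustrates that idea; the toy corpus, stopword list, and function name are invented, not the study's data or pipeline.

```python
import re
from collections import Counter

# A tiny illustrative stopword list; real pipelines use much larger ones.
STOPWORDS = {"the", "of", "in", "and", "to", "a", "for", "is", "are", "on", "with"}

def keyword_frequencies(documents, top_n=5):
    """Tokenize a corpus, drop common stopwords, and count term frequencies:
    the basic step behind the keyword analysis described above."""
    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z]+", doc.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(top_n)

# Toy corpus standing in for the collected healthcare articles.
corpus = [
    "RFID tags track medicines and equipment in real time",
    "RFID adoption improves patient safety in healthcare",
    "Tracking inventory with RFID reduces medical errors",
]
top_terms = keyword_frequencies(corpus, top_n=3)
```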
Procedia PDF Downloads 233
772 Feedback of an Automated Hospital about the Performance of an Automated Drug Dispensing System’s Implementation
Authors: Bouami Hind, Millot Patrick
Abstract:
The implementation of automated devices in life-critical systems such as hospitals can bring a new set of challenges related to automation malfunctions. While automation has been identified as great leverage for the security and efficiency of the medication dispensing system, it also increases the complexity of the organization. In particular, the installation and operation stages of automated devices can be complex when malfunctions related to the automated systems occur. This paper aims to document operators' situation awareness regarding malfunctions of automated delivery cabinets (ADCs) during their implementation, through feedback from the Saint Brieuc hospital center. Our evaluation approach was deployed in the pharmacy of the Saint Brieuc hospital center, which has been equipped with automated nominative drug dispensing systems since January 2021. The analysis of the pharmacy's automation revealed numerous malfunctions related to the implementation of the ADCs; the targeted performance was not reached in the first year of implementation in this case study. Errors were also collected in the production of patients' automated treatments, such as missing drugs in pill boxes or nominative carnets, excess drugs, wrong drug locations, damaged drug blisters, non-compliant sachets, and ticket errors. The pharmacy of the Saint Brieuc hospital center has done a tremendous job of setting up and monitoring performance indicators from the beginning of automation and throughout the ADCs' operation, in order to control malfunctions and meet the performance targeted by the hospital. Health professionals, including pharmacists, biomedical engineers, and directors of works, technical services, and safety, are heavily involved in the automation project.
This study highlights the importance of evaluating ADCs' performance throughout the implementation process and of involving the hospital's team in automation supervision and management.
Keywords: life-critical systems, situation awareness, automated delivery cabinets, implementation, risks and malfunctions
Procedia PDF Downloads 99
771 The Effects of Applied Negative Bias Voltage on Structure and Optical Properties of a-C:H Films
Authors: X. L. Zhou, S. Tunmee, I. Toda, K. Komatsu, S. Ohshio, H. Saitoh
Abstract:
Hydrogenated amorphous carbon (a-C:H) films have been synthesized by a radio frequency plasma-enhanced chemical vapor deposition (rf-PECVD) technique with bias voltages from 0.0 to -0.5 kV. The Raman spectra indicated polymer-like hydrogenated amorphous carbon (PLCH) films at bias voltages of 0.0 to -0.1 kV and a-C:H films at -0.2 to -0.5 kV. The surface chemical information of all films was studied by X-ray photoelectron spectroscopy (XPS), revealing C-C (sp2 and sp3) and C-O bonds and the relative atomic contents of carbon (C) and oxygen (O). The O contamination affected the structure and optical properties. The true density of the PLCH and a-C:H films was characterized by the X-ray reflectivity (XRR) method and found to be in the range of 1.16-1.73 g/cm3, depending on the increasing bias voltage. The hardness was proportional to the true density of the films. In addition, the optical properties, i.e., the refractive index (n) and extinction coefficient (k) of these films, were determined by spectroscopic ellipsometry (SE) to be 1.62-2.10 (n) and 0.04-0.15 (k), respectively. These results indicate that the optical properties confirm the Raman results, showing that the structure changed as the applied bias voltage increased.
Keywords: negative bias voltage, a-C:H film, oxygen contamination, optical properties
Procedia PDF Downloads 482
770 The Effects of Nanoemulsions Based on Commercial Oils: Sunflower, Canola, Corn, Olive, Soybean, and Hazelnut Oils for the Quality of Farmed Sea Bass at 2±2°C
Authors: Yesim Ozogul, Mustafa Durmuş, Fatih Ozogul, Esmeray Kuley Boğa, Yılmaz Uçar, Hatice Yazgan
Abstract:
The effects of oil-in-water nanoemulsions on the sensory, chemical (total volatile basic nitrogen (TVB-N), thiobarbituric acid (TBA), peroxide value (PV), and free fatty acids (FFA)), and microbiological quality (total viable count (TVC), total psychrophilic bacteria, and total Enterobacteriaceae) of sea bass fillets stored at 2 ± 2°C were investigated. The physical properties of the emulsions (viscosity, droplet particle size, thermodynamic stability, refractive index, and surface tension) were determined. The results showed that the use of nanoemulsions extended the shelf life of the fish by 2 days compared with the control. Treatment with nanoemulsions significantly (p<0.05) decreased the values of the biochemical parameters during the storage period, and bacterial growth was inhibited by the use of nanoemulsions. Based on these results, it can be concluded that nanoemulsions based on commercial oils extended the shelf life and improved the quality of sea bass fillets during the storage period.
Keywords: lipid oxidation, nanoemulsion, sea bass, quality parameters
Procedia PDF Downloads 479
769 Working Memory and Audio-Motor Synchronization in Children with Different Degrees of Central Nervous System's Lesions
Authors: Anastasia V. Kovaleva, Alena A. Ryabova, Vladimir N. Kasatkin
Abstract:
Background: The simplest form of entrainment to a sensory (typically auditory) rhythmic stimulus involves perceiving and synchronizing movements with an isochronous beat with one level of periodicity, such as that produced by a metronome. Children with pediatric cancer are usually treated with chemotherapy and radiotherapy, and because of such treatment, psychologists and health professionals report declines in the cognitive and motor abilities of cancer patients. The purpose of our study was to measure working memory characteristics in association with audio-motor synchronization tasks, which also involve some memory resources, in children with different degrees of central nervous system lesions: posterior fossa tumors, acute lymphoblastic leukemia, and healthy controls. Methods: Our sample consisted of three groups of children: children treated for posterior fossa tumors (PFT group, n=42, mean age 12.23), children treated for acute lymphoblastic leukemia (ALL group, n=11, mean age 11.57), and neurologically healthy children (control group, n=36, mean age 11.67). Participants were tested for working memory characteristics with the Cambridge Neuropsychological Test Automated Battery (CANTAB); the pattern recognition memory (PRM) and spatial working memory (SWM) tests were applied. Outcome measures of the PRM test include the number and percentage of correct trials and latency (speed of the participant's response), and measures of the SWM test include errors, strategy, and latency. In the synchronization tests, the instruction was to tap out a regular beat (40, 60, 90, and 120 beats per minute) in synchrony with the rhythmic sequences that were played. This meant that for sequences with an isochronous beat, participants were required to tap along with every auditory event. Variations in inter-tap intervals and deviations of the children's taps from the metronome were assessed.
Results: Analysis of variance revealed a significant effect of group (ALL, PFT, and control) on parameters such as short-term PRM and SWM strategy and errors. Healthy controls retained more elements correctly and used a better working memory strategy compared to the cancer patients. Interestingly, the ALL patients chose a poorer strategy but committed significantly fewer errors on the SWM test than the PFT and control groups did. As for rhythmic ability, significant associations with working memory were found only for the 40 bpm rhythm: the less variable a child's inter-tap intervals were, the more elements he or she could retain in memory. The ability to synchronize with auditory stimuli may be related to working memory processes mediated by the prefrontal cortex, whereby each sensory event is actively retrieved and monitored during rhythmic sequencing. Conclusion: Our results suggest that working memory, tested with appropriate cognitive methods, is associated with the ability to synchronize movements with rhythmic sounds, especially at slow tempi (40 beats per minute).
Keywords: acute lymphoblastic leukemia (ALL), audio-motor synchronization, posterior fossa tumor, working memory
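Synchronization performance of the kind assessed above can be quantified from raw tap timestamps: the inter-tap intervals (ITIs), their variability (SD), and their deviation from the metronome interval. The sketch below is a minimal illustration; the tap times are hypothetical, not study data.

```python
def tapping_stats(tap_times_ms, beat_interval_ms):
    """From a series of tap timestamps (ms), compute the mean and population
    standard deviation of inter-tap intervals (ITIs) and the mean absolute
    deviation of each ITI from the metronome interval."""
    itis = [b - a for a, b in zip(tap_times_ms, tap_times_ms[1:])]
    n = len(itis)
    mean_iti = sum(itis) / n
    sd_iti = (sum((x - mean_iti) ** 2 for x in itis) / n) ** 0.5
    mean_dev = sum(abs(x - beat_interval_ms) for x in itis) / n
    return mean_iti, sd_iti, mean_dev

# 40 bpm metronome -> 1500 ms between beats; a hypothetical child's taps.
taps = [0, 1480, 3010, 4500, 6020]
mean_iti, sd_iti, mean_dev = tapping_stats(taps, 1500.0)
```

Lower `sd_iti` corresponds to the "less variable inter-tap intervals" that the study found to be associated with better working memory scores.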
Procedia PDF Downloads 300
768 Localized Dynamic Lensing with Extended Depth of Field via Enhanced Light Sound Interaction
Authors: Hamid R. Chabok, Demetrios N. Christodoulides, Mercedeh Khajavikhan
Abstract:
In recent years, acousto-optic (AO) lenses with tunable foci have emerged as a powerful tool for optical beam shaping, imaging, and particle manipulation. In most current AO lenses, incident light propagating orthogonally to a standing ultrasonic wave is converted to a Bessel-like beam pattern due to the Raman-Nath effect, forming annular fringes that compromise the focus response. Here, we report a new class of AO dynamic lensing based on generating a 3D-variable refractive index profile via a z-axis-scanning ultrasound transducer. By utilizing the co-/counter-propagation of light and acoustic waves that interact over a longer distance, the laser beam can be strongly focused in a fully controllable manner. Using this approach, we demonstrate AO lenses with an instantaneously extended depth of field (DoF) and laterally localized dynamic focusing. This new light-sound interaction scheme may pave the way towards applications that require remote focusing, 3D micromanipulation, and deep-tissue therapy/imaging.
Keywords: acousto-optic, optical beam shaping, dynamic lensing, ultrasound
Procedia PDF Downloads 101
767 Pavement Management for a Metropolitan Area: A Case Study of Montreal
Authors: Luis Amador Jimenez, Md. Shohel Amin
Abstract:
Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic development could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming for lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, and ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state appears to be reached.
Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization
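The ESAL accumulation step can be sketched with the common approximation: annual ESALs ≈ AADT × truck share × truck factor × 365, compounded under a geometric traffic growth rate. All inputs below are illustrative assumptions, not the study's calibrated values; only the 50-year traffic-doubling projection comes from the abstract.

```python
def accumulated_esals(aadt, truck_share, truck_factor, years, annual_growth):
    """Accumulate equivalent single axle loads (ESALs) over a horizon,
    growing AADT geometrically each year."""
    total = 0.0
    for _ in range(years):
        total += aadt * truck_share * truck_factor * 365.0  # ESALs this year
        aadt *= 1.0 + annual_growth                         # traffic growth
    return total

# Growth rate that doubles traffic in 50 years, as projected for Montreal.
growth = 2.0 ** (1.0 / 50.0) - 1.0

# Hypothetical corridor: 20,000 AADT, 8% trucks, truck factor 1.2 ESALs/truck.
esals_1yr = accumulated_esals(20_000, 0.08, 1.2, 1, growth)
esals_50yr = accumulated_esals(20_000, 0.08, 1.2, 50, growth)
```

In a real pavement management system, the truck factor itself varies by axle configuration and pavement structure, so the single constant here is a deliberate simplification.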
Procedia PDF Downloads 460
766 Lexical-Semantic Deficits in Sinhala Speaking Persons with Post Stroke Aphasia: Evidence from Single Word Auditory Comprehension Task
Authors: D. W. M. S. Samarathunga, Isuru Dharmarathne
Abstract:
In aphasia, various levels of symbolic language processing (semantics) are affected. It has been shown that persons with aphasia (PWA) often experience more problems comprehending some categories of words than others. The study aimed to determine the lexical-semantic deficits seen in auditory comprehension (AC) and to describe these deficits across six selected word categories. Thirteen (n = 13) persons diagnosed with post-stroke aphasia (PSA) were recruited to perform an AC task. Foods, objects, clothes, vehicles, body parts, and animals were selected as the six categories. As the test stimuli, black-and-white line drawings were adapted from a picture set developed for semantic studies by Snodgrass and Vanderwart. A pilot study was conducted with five (n = 5) healthy, non-brain-damaged Sinhala-speaking adults to establish the familiarity and applicability of the test material. In the main study, participants were scored based on accuracy and the number of errors shown. The results indicate trends similar to the lexical-semantic deficits identified in the literature, confirming 'animals' to be the easiest category to comprehend. The Mann-Whitney U test was performed to determine the association between the selected variables and the participants' performance on the AC task. No statistically significant association was found between the errors and the type of aphasia, reflecting patterns similar to those described in the aphasia literature for other languages. The current study indicates the presence of selectivity in the lexical-semantic deficits seen in AC, and a hierarchy was developed based on the difficulty of comprehending the categories for Sinhala-speaking PWA, which might be clinically beneficial when improving the language skills of Sinhala-speaking persons with post-stroke aphasia.
However, further studies on aphasia should be conducted with larger samples over longer periods to examine deficits in Sinhala and other languages used in Sri Lanka (Tamil and Malay). Keywords: aphasia, auditory comprehension, selective lexical-semantic deficits, semantic categories
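The Mann-Whitney U statistic used in the analysis can be computed directly from rank sums. The error counts below are invented stand-ins for two hypothetical aphasia groups, not the study's data:

```python
import numpy as np

def mann_whitney_u(x, y):
    """U statistic of sample x against sample y; ties receive mid-ranks."""
    combined = np.concatenate([x, y])
    order = combined.argsort(kind="mergesort")
    ranks = np.empty(len(combined))
    ranks[order] = np.arange(1, len(combined) + 1)
    for v in np.unique(combined):          # average the ranks of tied values
        mask = combined == v
        ranks[mask] = ranks[mask].mean()
    r1 = ranks[: len(x)].sum()             # rank sum of the first sample
    return r1 - len(x) * (len(x) + 1) / 2.0

group_a = np.array([4, 6, 5, 7, 5, 6])     # hypothetical error counts, group A
group_b = np.array([5, 6, 4, 7, 6, 5, 8])  # hypothetical error counts, group B
u = mann_whitney_u(group_a, group_b)
```

For samples this small one would consult exact U tables for significance; scipy.stats.mannwhitneyu computes the same statistic together with p-values.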
Procedia PDF Downloads 253
765 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI architectures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both image dimensions and neural network models, thereby enhancing robustness. Our multimodal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library.
The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
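The detection principle, training an autoencoder on benign data only and flagging inputs with abnormally high reconstruction MSE, can be illustrated with a minimal linear autoencoder (closed-form via SVD). The data sizes, the 99th-percentile threshold, and the perturbation scale are assumptions; the paper's actual model is a multimodal transformer autoencoder, not this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Benign data live near a low-dimensional subspace of "pixel" space.
basis = rng.normal(size=(4, 64))                       # latent dim 4, input dim 64
benign = rng.normal(size=(500, 4)) @ basis + 0.05 * rng.normal(size=(500, 64))

# The optimal linear autoencoder projects onto the top principal components.
mean = benign.mean(axis=0)
_, _, vt = np.linalg.svd(benign - mean, full_matrices=False)
components = vt[:4]                                    # shared encoder/decoder weights

def reconstruction_mse(x):
    z = (x - mean) @ components.T                      # encode
    x_hat = z @ components + mean                      # decode
    return np.mean((x - x_hat) ** 2, axis=1)

# Threshold set from benign data only; perturbed inputs should exceed it.
threshold = np.percentile(reconstruction_mse(benign), 99)
adversarial = benign[:50] + 0.5 * rng.normal(size=(50, 64))  # crude perturbation
detection_rate = float((reconstruction_mse(adversarial) > threshold).mean())
```

Because the perturbation has energy outside the benign subspace, its reconstruction error is orders of magnitude above the benign threshold, which is the effect the abstract exploits.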
Procedia PDF Downloads 112
764 Evaluation of Correct Usage, Comfort and Fit of Personal Protective Equipment in Construction Work
Authors: Anna-Lisa Osvalder, Jonas Borell
Abstract:
There are several reasons behind the use, non-use, or inadequate use of personal protective equipment (PPE) in the construction industry. Comfort and accurate size support proper use, while discomfort, misfit, and difficulties in understanding how the PPE should be handled inhibit correct usage. The need to wear several pieces of protective equipment simultaneously might also create problems. The purpose of this study was to analyse the correct usage, comfort, and fit of different types of PPE used in construction work. Correct usage was analysed as guessability, i.e., human perceptions of how to don, adjust, use, and doff the equipment, and whether it is used as intended. The PPE tested individually or in combinations were a helmet, ear protectors, goggles, respiratory masks, gloves, protective clothing, and safety harnesses. First, an analytical evaluation was performed with ECW (enhanced cognitive walkthrough) and PUEA (predictive use error analysis) to search for usability problems and use errors during handling and use. Then usability tests were conducted to evaluate guessability, comfort, and fit with 10 test subjects of different heights and body constitutions. The tests included observations during donning, five different outdoor work tasks, and doffing. The think-aloud method, short interviews, and subjective ratings were used. The analytical evaluation showed that some usability problems and use errors arise during donning and doffing, but with minor severity, mostly causing discomfort. A few use errors and usability problems arose for the safety harness, especially for novices, some of which could lead to a high risk of severe incidents. The usability tests showed that discomfort arose for all test subjects when using a combination of PPE, increasing over time. For instance, goggles together with the face mask caused pressure, chafing at the nose, and heat rash on the face. This combination also limited the field of vision.
The helmet, in combination with the goggles and ear protectors, did not fit well and caused uncomfortable pressure at the temples. No major problems were found with the individual fit of the PPE. The ear protectors, goggles, and face masks could be adjusted for different head sizes. The guessability of how to don and wear the combination of PPE was moderate, but it took some time to adjust the items for a good fit. The guessability was poor for the safety harness; few clues in the design showed how it should be donned, adjusted, or worn on the skeletal bones. Discomfort occurred when the straps were tightened too much. Not all straps could be adjusted to every body constitution, leading to non-optimal safety. To conclude, if several types of PPE are used together, discomfort leading to pain is likely to occur over time, which can lead to misuse, non-use, or reduced performance. For people who are not regular users to wear a safety harness correctly, the design needs to be improved for easier interpretation, correct positioning of the straps, and increased possibilities for individual adjustment. The results of this study can serve as a basis for re-design ideas for PPE, especially when items are to be used in combination. Keywords: construction work, PPE, personal protective equipment, misuse, guessability, usability
Procedia PDF Downloads 87
763 An Approach to Solving Some Inverse Problems for Parabolic Equations
Authors: Bolatbek Rysbaiuly, Aliya S. Azhibekova
Abstract:
Problems concerning the interpretation of well testing results belong to the class of inverse problems of subsurface hydromechanics. The distinctive feature of such problems is that the available additional information depends on the capabilities of oilfield experiments. Another factor that should not be overlooked is the existence of errors in the test data. To determine reservoir properties, some inverse problems for parabolic equations were investigated. An approach to solving these inverse problems based on the method of regularization is proposed. Keywords: iterative approach, inverse problem, parabolic equation, reservoir properties
Procedia PDF Downloads 428
762 Investigation of Thickness Dependent Optical Properties of Bi₂Sb₃₋ₓTeₓ (where x = 0.1, 0.2, 0.3) Thin Films
Authors: Reena Panchal, Maunik Jani, S. M. Vyas, G. R. Pandya
Abstract:
Group V-VI compounds have a narrow band gap, which makes them useful in many electronic devices. In bulk form, BiSbTe alloys are semi-metals or semiconductors. They are used in thermoelectric and thermomagnetic devices, the fabrication of ionizing radiation detectors, LEDs, solid-state electrodes, photosensitive heterostructures, solar cells, ionic batteries, etc. Thin films of Bi₂Sb₃₋ₓTeₓ (where x = 0.1, 0.2, 0.3) of various thicknesses were grown by the thermal evaporation technique on glass substrates at room temperature under a pressure of 10⁻⁴ mbar for deposition times of 10 s, 15 s, and 20 s. The thickness of these thin films was also obtained using the Swanepoel envelope method, and the values were compared with instrumental measurements. The optical absorption (%) of the thin films was measured in the wave number range of 650 cm⁻¹ to 4000 cm⁻¹. The band gap was evaluated from these optical absorption data, and the results indicate that absorption occurs by a direct interband transition. It was found that as thickness decreased, the band gap increased; this dependence is inversely related to the square of the thickness, which is explained by the quantum size effect. From the band gap values, the optical electronegativity (∆χ) and optical refractive index (η) were obtained using various relations. Keywords: thin films, band gap, film thickness, optical study, size effect
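The quantum-size-effect analysis described above amounts to fitting the measured band gap against the inverse square of the film thickness, Eg(d) = Eg(bulk) + C/d². The thickness and gap values below are invented for illustration, not the paper's measurements:

```python
import numpy as np

d = np.array([40.0, 60.0, 90.0, 140.0])           # film thickness, nm (assumed)
eg = np.array([1.10, 0.85, 0.70, 0.62])           # optical band gap, eV (assumed)

# Linear fit of Eg against 1/d^2; the intercept extrapolates to bulk.
slope, intercept = np.polyfit(1.0 / d**2, eg, 1)
eg_bulk = intercept                               # estimated bulk band gap, eV
```

A positive slope reproduces the trend reported in the abstract: the gap widens as the film gets thinner, and the intercept estimates the bulk band gap.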
Procedia PDF Downloads 18
761 Analytical Solution of Non-Autonomous Discrete Non-Linear Schrodinger Equation with Saturable Non-Linearity
Authors: Mishu Gupta, Rama Gupta
Abstract:
It is shown here that the non-autonomous discrete non-linear Schrödinger equation with saturable non-linearity arises in photorefractive media. We have investigated localized solutions of non-autonomous saturable discrete non-linear Schrödinger equations. A similarity transformation is used to convert the non-autonomous saturable discrete non-linear Schrödinger equation into the constant-coefficient saturable discrete non-linear Schrödinger equation (SDNLSE), whose exact solution is already known. By back substitution, the solution of the non-autonomous version has been obtained. We have analysed our solution for hyperbolic and periodic forms of the gain/loss term, and interesting results have been obtained. Most importantly, the solution helps us analyse the propagation of electromagnetic waves in glass fibres and other optical media. The SDNLSE has also been used in tight-binding models of Bose-Einstein condensates in optical media. The solutions are interrelated, and their properties are prominently used in various physical contexts such as optical waveguides, Bose-Einstein (B-E) condensates in optical media, non-linear optics in photonic crystals, Kerr-type non-linearities, and photorefractive media. Keywords: B-E - Bose-Einstein, DNLSE - discrete non-linear Schrödinger equation, NLSE - non-linear Schrödinger equation, SDNLSE - saturable discrete non-linear Schrödinger equation
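One commonly studied constant-coefficient form of the SDNLSE, and a non-autonomous variant with variable coefficients and a gain/loss term, can be sketched as follows. The symbols (coupling κ, non-linearity strength ν, saturation parameter μ, gain/loss γ) are generic choices for illustration, not necessarily the paper's notation:

```latex
% Constant-coefficient saturable DNLSE (generic form):
i\,\frac{d\psi_n}{dz}
  + \kappa\,(\psi_{n+1} + \psi_{n-1} - 2\psi_n)
  - \frac{\nu\,\psi_n}{1 + \mu\,|\psi_n|^2} = 0 .

% Non-autonomous variant: z-dependent coefficients plus a gain/loss term,
% reducible to the form above by a similarity transformation.
i\,\frac{d\psi_n}{dz}
  + \kappa(z)\,(\psi_{n+1} + \psi_{n-1} - 2\psi_n)
  - \frac{\nu(z)\,\psi_n}{1 + \mu\,|\psi_n|^2}
  + i\,\gamma(z)\,\psi_n = 0 .
```

The similarity transformation rescales amplitude and phase so that the z-dependent coefficients are absorbed, leaving the constant-coefficient equation whose exact solutions are known.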
Procedia PDF Downloads 155
760 Effects of Machining Parameters on the Surface Roughness and Vibration of the Milling Tool
Authors: Yung C. Lin, Kung D. Wu, Wei C. Shih, Jui P. Hung
Abstract:
High-speed and high-precision machining have become the most important technologies in the manufacturing industry. The surface roughness of high-precision components is regarded as an important characteristic of product quality. However, machining chatter can damage the machined surface and restricts process efficiency. Therefore, selection of appropriate cutting conditions is important to prevent the occurrence of chatter. In addition, vibration of the spindle tool also affects surface quality, which implies that surface precision can be controlled by monitoring the vibration of the spindle tool. Based on this concept, this study investigated the influence of machining conditions on the surface roughness and the vibration of the spindle tool. To this end, a series of machining tests was conducted on aluminum alloy. In the tests, the vibration of the spindle tool was measured using acceleration sensors. The surface roughness of the machined parts was examined using a white-light interferometer. The response surface methodology (RSM) was employed to establish mathematical models for predicting surface finish and tool vibration, respectively. The correlation between surface roughness and spindle tool vibration was also analyzed by ANOVA. Based on the machining tests, machined surfaces with or without chatter were marked on the lobe diagram as verification of the machining conditions. Using multivariable regression analysis, mathematical models for predicting the surface roughness and tool vibration were developed from the machining parameters: cutting depth (a), feed rate (f), and spindle speed (s). The predicted roughness agrees well with the measured roughness, with an average error of 10%. The average error between the measured tool vibrations and the predictions of the mathematical model is about 7.39%.
In addition, the tool vibration under various machining conditions was found to correlate positively with the surface roughness (r = 0.78). In conclusion, mathematical models were successfully developed for predicting the surface roughness and vibration level of the spindle tool under different cutting conditions, which can help select appropriate cutting parameters and monitor machining conditions to achieve high surface quality in milling operations. Keywords: machining parameters, machining stability, regression analysis, surface roughness
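The multivariable regression step can be sketched as an ordinary least-squares fit of a second-order (RSM-style) model in a, f, and s. The data-generating function, parameter ranges, and noise level below are invented for illustration, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.uniform(0.2, 2.0, 60)        # cutting depth, mm (assumed range)
f = rng.uniform(0.05, 0.3, 60)       # feed rate, mm/tooth (assumed range)
s = rng.uniform(4000.0, 12000.0, 60) # spindle speed, rpm (assumed range)

# Assumed "true" response used only to generate synthetic roughness data.
ra = 0.3 + 0.15 * a + 4.0 * f**2 - 1.0e-5 * s + 0.02 * rng.normal(size=60)

# Second-order RSM design matrix: intercept, linear, squared, interaction terms.
X = np.column_stack([np.ones(60), a, f, s,
                     a * a, f * f, s * s,
                     a * f, a * s, f * s])
coef, *_ = np.linalg.lstsq(X, ra, rcond=None)
pred = X @ coef
mape = float(np.mean(np.abs((pred - ra) / ra))) * 100.0  # mean % error
```

The abstract's reported average errors (10% for roughness, 7.39% for vibration) are exactly this kind of mean percentage deviation between model predictions and measurements.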
Procedia PDF Downloads 231
759 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads
Authors: Raja Umer Sajjad, Chang Hee Lee
Abstract:
Stormwater runoff is a leading contributor to the pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program thus depends mainly on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land-use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed throughout the year. The investigation of a large number of water quality parameters is time-consuming and resource-intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, principal component analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The effect of sampling time on monitoring results, the number of samples required during a storm event, and the impact of the seasonal first flush were also examined. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals such as lead, chromium, and copper, whereas chemical oxygen demand (COD) was identified as a surrogate for organic matter.
The CVs of the monitored water quality parameters were found to be high (ranging from 3.8 to 15.5). This suggests that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to the large variability. The TSS discharge load calculation error was only 2% between two different sample-size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that a grab sample collected after the initial hour of the storm event more closely approximates the mean concentration of the event. It was concluded that site- and climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical. Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters
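The surrogate-screening logic (coefficients of variation plus PCA loadings) can be sketched on synthetic concentrations. The correlation structure, in which turbidity and total phosphorus track TSS while COD varies independently, is an assumption built into the fake data, chosen to mirror the grouping reported above:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 120
tss = rng.lognormal(mean=4.0, sigma=0.8, size=n)     # mg/L, synthetic
turbidity = 0.8 * tss + rng.normal(0.0, 5.0, n)      # tracks TSS
total_p = 0.002 * tss + rng.normal(0.0, 0.05, n)     # tracks TSS
cod = rng.lognormal(mean=3.5, sigma=0.6, size=n)     # independent organic matter

data = np.column_stack([tss, turbidity, total_p, cod])
cv = data.std(axis=0) / data.mean(axis=0)            # coefficient of variation

# PCA on the correlation matrix of standardized data.
z = (data - data.mean(axis=0)) / data.std(axis=0)
corr = z.T @ z / n
eigvals, eigvecs = np.linalg.eigh(corr)
loadings = np.abs(eigvecs[:, -1])                    # |loadings| on leading PC
# Parameters loading strongly on PC1 form the TSS-linked group;
# COD loads weakly and needs its own surrogate (here, itself).
```

High absolute loadings of TSS, turbidity, and total phosphorus on the first component are what justify measuring only TSS as the cheap surrogate for that group.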
Procedia PDF Downloads 240
758 Enhancing the Structural, Optical, and Dielectric Properties of the Polymer Nanocomposites Based on Polymer Blend and Gold Nanoparticles for Application in Energy Storage
Authors: Mohammed Omar
Abstract:
Gold nanoparticles (Au NPs) were biosynthesized effectively via an environmentally friendly strategy using Chenopodium murale leaf extract. The nanoparticles were incorporated into a polyvinyl pyrrolidone (PVP)/sodium alginate (NaAlg) polymer blend, and composite layers were prepared by the casting technique. Thin films of the synthesized nanocomposites were analyzed before and after exposure to different doses of gamma irradiation (2, 4, and 6 Mrad). XRD revealed the amorphous nature of the polymer blend (PVP/NaAlg), which decreased with both the embedding of Au NPs and successive doses of irradiation. FT-IR spectra revealed interactions and differences in the functional groups relative to the pristine components and the dopant nano-fillers. The optical properties of the PVP/NaAlg-Au NP thin films (refractive index n, energy gap Eg, Urbach energy Eu) were examined before and after the irradiation procedure. Transmission electron micrographs (TEM) demonstrated a decrease in the size of the Au NPs and a narrower size distribution as the gamma irradiation dose was increased. Gamma irradiation was found to influence the electrical conductivity of the synthesized composite films, as well as the dielectric permittivity (ɛ′) and dielectric losses (ε″). Keywords: PVP, SPR, γ-radiations, XRD
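The Urbach-energy analysis mentioned above follows from the exponential sub-gap absorption tail, alpha = alpha0 * exp(E/Eu): the inverse slope of ln(alpha) versus photon energy E gives Eu. The absorption values below are synthetic, generated with Eu = 0.25 eV, and are not the measured film data:

```python
import numpy as np

E = np.array([3.0, 3.1, 3.2, 3.3, 3.4])     # photon energy, eV (assumed range)
alpha = 1.0e3 * np.exp(E / 0.25)            # absorption coefficient, cm^-1 (synthetic)

# Linear fit of ln(alpha) vs E; the Urbach energy is the inverse slope.
slope, _ = np.polyfit(E, np.log(alpha), 1)
urbach_energy = 1.0 / slope                 # eV
```

In practice a larger Eu after irradiation would indicate increased structural disorder in the film, which is what Eu is used to track.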
Procedia PDF Downloads 104
757 Visibility Measurements Using a Novel Open-Path Optical Extinction Analyzer
Authors: Nabil Saad, David Morgan, Manish Gupta
Abstract:
Visibility has become a key component of air quality and is regulated in many areas by environmental laws such as the EPA Clean Air Act and Regional Haze Rule. Typically, visibility is calculated by estimating the optical absorption and scattering of both gases and aerosols. A major component of the climatic effect of aerosols is due to their scattering and absorption of solar radiation, which are governed by their optical and physical properties. However, accurate assessment of this effect on global warming, climate change, and air quality is made difficult by uncertainties in the calculation of the single scattering albedo (SSA). Experimental complications arise in determining the single scattering albedo of an aerosol particle, since it requires the simultaneous measurement of both scattering and extinction. In fact, aerosol optical absorption in particular is a difficult measurement to perform, and it is often associated with large uncertainties when using filter methods or difference methods. In this presentation, we demonstrate the use of a new open-path optical extinction analyzer (OEA) in conjunction with a nephelometer and two particle sizers, emphasizing the benefits that combined deployment of the OEA offers for deriving the complex refractive index of aerosols and their single scattering albedo. Use cases, data reproducibility, and instrument calibration will also be presented to highlight the value proposition of this novel open-path OEA. Keywords: aerosols, extinction, visibility, albedo
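The quantities at stake combine simply: with the OEA measuring extinction and a nephelometer measuring scattering, absorption follows by difference, and the single scattering albedo is their ratio. The coefficient values below (in Mm⁻¹) are invented, and the Koschmieder visibility relation is included as a standard conversion, not as instrument output:

```python
import numpy as np

b_ext = np.array([120.0, 85.0, 240.0])    # OEA extinction coefficients, Mm^-1 (assumed)
b_scat = np.array([102.0, 70.0, 180.0])   # nephelometer scattering, Mm^-1 (assumed)

b_abs = b_ext - b_scat                    # absorption by difference
ssa = b_scat / b_ext                      # single scattering albedo
visual_range_km = 3912.0 / b_ext          # Koschmieder relation (2% contrast threshold)
```

The difference method's weakness, noted in the abstract, is visible here: b_abs is a small difference of two large numbers, so its relative uncertainty is amplified.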
Procedia PDF Downloads 90
756 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
High-resolution inline inspection (ILI) tools are used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., colonies of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools, due to limitations of the tools and the associated sizing algorithms, and to the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) are scarce in the literature, and this error is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, defined as the error introduced during the clustering process by including or excluding a single anomaly or a group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada.
Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted Type I anomalies, is markedly smaller than that for anomalies with clustering error, denoted Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information. Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
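The Type I versus Type II comparison can be illustrated on synthetic data: ILI and field lengths are simulated so that anomalies without clustering error carry a small random sizing error, while clustered anomalies carry a biased, high-scatter error. A simple threshold rule stands in for the paper's data-mining classifier; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
field_1 = rng.uniform(20.0, 120.0, 80)        # field-measured lengths, mm (Type I)
ili_1 = field_1 + rng.normal(0.0, 5.0, 80)    # small random sizing error
field_2 = rng.uniform(20.0, 120.0, 40)        # field-measured lengths, mm (Type II)
ili_2 = field_2 + rng.normal(15.0, 30.0, 40)  # clustering error: bias plus scatter

err_1 = ili_1 - field_1                       # ILI minus field, Type I
err_2 = ili_2 - field_2                       # ILI minus field, Type II

# Stand-in classifier: flag as Type II when |ILI - field| exceeds 12 mm (assumed).
labels = np.concatenate([np.zeros(80), np.ones(40)])
pred = (np.abs(np.concatenate([err_1, err_2])) > 12.0).astype(float)
accuracy = float((pred == labels).mean())
```

The much larger spread of err_2 relative to err_1 is the pattern the study reports, and it is what makes a classification of anomaly types from ILI-reported information feasible.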
Procedia PDF Downloads 309
755 Optical Coherence Tomography in Differentiation of Acute and Non-Healing Wounds
Authors: Ananya Barui, Provas Banerjee, Jyotirmoy Chatterjee
Abstract:
The application of optical technology in medicine and biology has a long track record. OCT, in particular, has attracted engineers and biologists to work together in the field of photonics, establishing a striking non-invasive imaging technology. Other in vivo imaging modalities, such as Raman imaging, confocal imaging, and two-photon microscopy, can image up to 100-200 µm in depth because of limitations in numerical aperture or scattering; OCT, by contrast, can achieve high-resolution imaging up to a few millimetres into tissue structures, depending on their refractive index at different anatomical locations. This tomographic system relies on the interference of two light waves in an interferometer to produce a depth profile of the specimen. In wound healing, the frequent collection of biopsies to follow up the repair process could be avoided by such an imaging technique. Real-time skin OCT (the 'optical biopsy') can illuminate cutaneous tissue deeper and faster to acquire high-resolution cross-sectional images of its internal microstructure. Swept-source OCT (SS-OCT), a novel imaging technique, can generate high-speed depth profiles (~2 mm) of a wound at a high laser sweep rate, with micron-level resolution and an optimum coherence length of 5-6 mm. Multi-layered skin tissue normally exhibits different optical properties, along with variations in thickness, refractive index, and composition (e.g., keratin layer, water, fat), according to anatomical location. For instance, the stratum corneum, the uppermost and relatively dehydrated layer of the epidermis, reflects more light and produces a sharp, lucid demarcation line with the rest of the hydrated epidermal region. During wound healing or regeneration, the optical properties of cutaneous tissue are continuously altered as the wound bed matures.
More mature and less hydrated tissue reflects more light and appears as a brighter area, whereas immature tissue, which contains more water or fat, appears as a darker area in the OCT image. Non-healing wounds exhibit prolonged inflammation, which inhibits the nascent proliferative stage. The accumulation of necrotic tissue also prevents the repair of non-healing wounds. Owing to its high resolution and its ability to reflect the compositional aspects of tissues through their optical properties, this tomographic method may facilitate differentiation between non-healing and acute wounds in addition to clinical observation. Non-invasive OCT offers better insight into the specific biological status of tissue in health and in pathological conditions, and OCT images can be associated with the histo-pathological 'gold standard'. Correlated SS-OCT and microscopic evaluation of the wound edges can provide information on progressive healing and the maturation of the epithelial components. In searching for analogies between the two imaging modalities, their relative performance in imaging the healing bed was assessed to probe an alternative approach. The present study validated the utility of SS-OCT in revealing the micro-anatomy of the healing bed with new information. Exploring the precise correspondence of OCT image features with histo-chemical findings related to the epithelial integrity of the regenerated tissue could have great implications: it could establish the 'optical biopsy' as a potent non-invasive diagnostic tool for cutaneous pathology. Keywords: histo-pathology, non-invasive imaging, OCT, wound healing
Procedia PDF Downloads 279
754 Astronomical Object Classification
Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan
Abstract:
We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of ≳65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way and performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the minimum error variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS) but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For the optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and the choice of filters.
The calibration accuracy poses strong constraints on accurate classification, which are most critical for surveys with few, broad, and deeply exposed filters, but less severe for surveys with many, narrow, and less deep filters. Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis
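The template-matching idea can be illustrated with a toy chi-square fit: an observed color vector is compared against templates, and the redshift grid point of the best-fitting galaxy template is taken as the estimate. The templates, filter count, noise level, and the star template are all invented for illustration, not the CADIS library:

```python
import numpy as np

zgrid = np.linspace(0.0, 1.0, 101)                    # redshift grid

def galaxy_colors(z):
    """Toy 3-color galaxy model whose colors redden smoothly with redshift."""
    return np.array([0.5 + 1.2 * z, 0.3 + 0.8 * z, 0.1 + 0.4 * z])

star_template = np.array([0.2, 0.9, 0.4])             # fixed, redshift-free template

rng = np.random.default_rng(6)
sigma = 0.05                                          # photometric color error
true_z = 0.42
observed = galaxy_colors(true_z) + sigma * rng.normal(size=3)

# Chi-square against every template; classification picks the global minimum.
chi2_gal = np.array([np.sum(((observed - galaxy_colors(z)) / sigma) ** 2)
                     for z in zgrid])
chi2_star = np.sum(((observed - star_template) / sigma) ** 2)

z_hat = zgrid[np.argmin(chi2_gal)]                    # redshift estimate
is_galaxy = chi2_gal.min() < chi2_star                # star/galaxy decision
```

The full method replaces the single chi-square minimum with probability density functions over class and redshift, which is how it also delivers the redshift error estimate.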
Procedia PDF Downloads 78
753 A Theoretical Modelling and Simulation of a Surface Plasmon Resonance Biosensor for the Detection of Glucose Concentration in Blood and Urine
Authors: Natasha Mandal, Rakesh Singh Moirangthem
Abstract:
The present work reports a theoretical model for a plasmonic biosensor for the detection of glucose concentrations in human blood and urine, as abnormal glucose levels are the hallmark of diabetes, a life-threatening disease worldwide. The study builds on surface plasmon resonance (SPR) sensing, a well-established, highly sensitive, label-free, and rapid optical sensing technique. Here we introduce a sandwich assay of two dielectric spacer layers, MgF₂ and BaTiO₃, which gives better performance than the commonly used SiO₂ and TiO₂ spacers owing to their lower dielectric loss and higher refractive index. The sensitivity of our proposed sensor was found to be approximately 3242 nm/RIU, with an excellent linear response of 0.958, which is higher than that of the conventional single-layer Au SPR sensor. Further, the sensitivity enhancement is also optimized by coating a few layers of two-dimensional (2D) nanomaterials (e.g., graphene, h-BN, MXene, MoS₂, WS₂) on the sensor chip. Hence, our proposed SPR sensor has the potential to detect glucose concentrations in blood and urine with enhanced sensitivity and high affinity and could serve as a reliable platform for optical biosensing in the field of medical diagnosis. Keywords: biosensor, surface plasmon resonance, dielectric spacer, 2D nanomaterials
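The bulk sensitivity figure quoted above is obtained by recording the resonance wavelength at several analyte refractive indices and taking the slope of a linear fit, S = d(lambda_res)/dn in nm/RIU. The resonance-shift data below are invented (chosen merely to land near the reported sensitivity), not the simulated sensor's output:

```python
import numpy as np

n_analyte = np.array([1.330, 1.335, 1.340, 1.345, 1.350])  # analyte index, RIU
lam_res = np.array([620.0, 636.5, 652.8, 669.0, 685.4])    # resonance wavelength, nm (assumed)

# Linear fit: slope is the bulk sensitivity, correlation checks linearity.
slope, intercept = np.polyfit(n_analyte, lam_res, 1)
sensitivity = slope                                        # nm per RIU
r = np.corrcoef(n_analyte, lam_res)[0, 1]                  # linearity of response
```

A correlation coefficient near 1 corresponds to the "linear response" figure reported for the sensor.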
Procedia PDF Downloads 106