Search results for: refractive errors
872 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method
Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David
Abstract:
Deflection of the vertical is a quantity used in reducing geodetic measurements referred to the geoid onto the ellipsoid, and it is essential in geoid modelling. Computing the deflection-of-the-vertical components of a point in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. A combined approach to determining the components gives improved results but is labour-intensive without an appropriate estimation method. Least squares is a method that makes use of redundant observations to model a set of observations that obey certain geometric conditions. This research work is aimed at computing the deflection-of-the-vertical components for Owerri West Local Government Area of Imo State using a geometric method as the field technique. In this method, a combination of Global Positioning System observations in static mode and precise levelling was used: the geodetic coordinates of points established within the study area were determined by GPS observation and the orthometric heights by precise levelling. By least squares, using a Matlab programme, the estimated deflection-of-the-vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were computed: 5.5911e-005 and 1.4965e-004 arc seconds for the north-south and east-west components, respectively. Including the derived deflection-of-the-vertical components in the ellipsoidal model will therefore yield higher observational accuracy, since a purely ellipsoidal model introduces errors that are too large for high-quality work. It is important to apply the determined deflection-of-the-vertical components for Owerri West Local Government in Imo State, Nigeria.
Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height
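As a rough illustration of the estimation step described above, the sketch below sets up linearised astro-geodetic observation equations for the two deflection components and solves them by least squares in Python rather than Matlab. The baseline data, the simplified observation model and the variable names are illustrative assumptions, not the authors' network.

```python
import numpy as np

# Sketch: least-squares estimation of the deflection-of-the-vertical
# components xi (north-south) and eta (east-west). For a short baseline of
# length s and azimuth alpha, the geoid height change
# dN = dh_ellipsoidal - dH_orthometric satisfies approximately
#   dN = -(xi * cos(alpha) + eta * sin(alpha)) * s
# The baseline data below are made up for illustration.

alpha = np.radians([30.0, 110.0, 200.0, 305.0])  # baseline azimuths
s = np.array([1200.0, 950.0, 1500.0, 800.0])     # baseline lengths (m)
dN = np.array([-0.015, -0.004, 0.018, 0.009])    # dh - dH per baseline (m)

# Observation equations A @ [xi, eta] = dN, solved by least squares
A = np.column_stack([-s * np.cos(alpha), -s * np.sin(alpha)])
x, *_ = np.linalg.lstsq(A, dN, rcond=None)

v = A @ x - dN                        # residuals from redundant observations
sigma0_sq = (v @ v) / (len(dN) - 2)   # a-posteriori variance factor
cov = sigma0_sq * np.linalg.inv(A.T @ A)
xi_as, eta_as = np.degrees(x) * 3600  # radians -> arc seconds
se = np.degrees(np.sqrt(np.diag(cov))) * 3600
print(f"xi = {xi_as:.4f} arcsec, eta = {eta_as:.4f} arcsec")
print(f"standard errors: {se[0]:.3e}, {se[1]:.3e} arcsec")
```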
Procedia PDF Downloads 209
871 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio
Authors: Urvee B. Trivedi, U. D. Dalal
Abstract:
As wireless communication services grow rapidly, the pressure on spectrum utilization has been rising steadily. Cognitive radio is an emerging technology that has come out to address today's spectrum scarcity problem. To support spectrum reuse, secondary users are required to sense the radio frequency environment and, once the primary users are found to be active, to vacate the channel within a certain amount of time. Spectrum sensing is therefore of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. Primary users follow two types of traffic pattern: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two classification methods are discussed in this paper: one based on edge detection and one based on the autocorrelation function. The edge detection method has high accuracy but cannot tolerate sensing errors. Autocorrelation-based classification is applicable in a real environment, as it can tolerate some amount of sensing errors.
Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)
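To make the two-stage idea concrete, here is a minimal sketch: an energy detector produces per-slot busy/idle decisions, and the autocorrelation of that decision sequence exposes a periodic ON-OFF pattern despite some sensing errors. The signal model, threshold and period are assumptions for illustration, not the paper's setup.

```python
import numpy as np

# Sketch: energy detection followed by autocorrelation-based classification
# of a primary user's ON-OFF pattern. All parameters are assumed.

rng = np.random.default_rng(0)
n_slots, samples_per_slot, snr = 400, 64, 1.0

# Periodic ON-OFF primary user: 5 slots ON, 5 slots OFF
on_off = np.tile(np.r_[np.ones(5), np.zeros(5)], n_slots // 10)
noise = rng.normal(size=(n_slots, samples_per_slot))
signal = np.sqrt(snr) * rng.normal(size=(n_slots, samples_per_slot))
rx = noise + on_off[:, None] * signal

# Energy detector: per-slot energy compared with a threshold
energy = (rx ** 2).mean(axis=1)
threshold = 1.0 + 0.5 * np.sqrt(snr)       # crude, assumed threshold
decisions = (energy > threshold).astype(float)

# Autocorrelation of the binary decision sequence: a periodic pattern
# produces regularly spaced peaks even with some sensing errors.
d = decisions - decisions.mean()
acf = np.correlate(d, d, mode="full")[len(d) - 1:]
acf /= acf[0]
lag = 1 + np.argmax(acf[1:50])             # first dominant lag
print(f"estimated ON-OFF period: {lag} slots (true period: 10)")
```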
Procedia PDF Downloads 345
870 Extraction of Aromatic Hydrocarbons from Lube Oil Using Surfactant as Additive
Authors: Izza Hidaya, Korichi Mourad
Abstract:
Solvent extraction is an effective method for reducing the aromatic content of lube oil, frequently with phenol, furfural or NMP (N-methyl pyrrolidone) as the solvent. The solvent power and selectivity can be further increased by using a surfactant as an additive, which facilitates phase separation and increases the raffinate yield. The aromatics in lube oil were extracted at different temperatures (ranging from 333.15 to 343.15 K) and different concentrations of surfactant (ranging from 0.01 to 0.1 wt.%). The extraction temperature and the amount of sodium lauryl ether sulfate in phenol were investigated systematically in order to determine their optimum values. The amounts of aromatic, paraffinic and naphthenic compounds were determined using ASTM standards by measuring the refractive index (RI), viscosity, molecular weight and sulfur content. It was found that using 0.01 wt.% surfactant at 343.15 K yields the optimum extraction conditions.
Keywords: extraction, lubricating oil, aromatics, hydrocarbons
Procedia PDF Downloads 521
869 Synthesis and Characterizations of Lead-free BaO-Doped TeZnCaB Glass Systems for Radiation Shielding Applications
Authors: Rezaul K. Sk., Mohammad Ashiq, Avinash K. Srivastava
Abstract:
The use of radiation shielding technology, for radiation ranging from EMI to high-energy gamma rays, is increasing all over the world in areas such as electronic devices, medical science, defense, nuclear power plants and medical diagnostics. However, exposure to radiation such as X-rays, gamma rays, neutrons and EMI above the permissible limits is harmful to living beings, the environment and sensitive laboratory equipment, so there is a need to develop effective radiation shielding materials. Conventionally, lead and lead-based materials are used, as lead is cheap, dense and provides very effective shielding. The problem associated with the use of lead, however, is its toxic and carcinogenic nature. To overcome these drawbacks, there is a great need for lead-free radiation shielding materials that are also economically sustainable; it is therefore necessary to synthesize radiation-shielding glass using other heavy metal oxides (HMO) instead of lead. Lead-free BaO-doped TeZnCaB glass systems have been synthesized by the traditional melt-quenching method. X-ray diffraction analysis confirmed the glassy nature of the synthesized samples. The densities of the developed glass samples increased with the BaO concentration, ranging from 4.292 to 4.725 g/cm³. The vibrational and bending modes of the BaO-doped glass samples were analyzed by Raman spectroscopy, and FTIR (Fourier-transform infrared spectroscopy) was performed to study the functional groups present in the samples. UV-visible characterization yielded optical parameters such as Urbach's energy, refractive index and optical energy band gap: the indirect and direct energy band gaps decreased with the BaO concentration, whereas the refractive index increased. X-ray attenuation measurements were performed to determine the radiation shielding parameters, namely the linear attenuation coefficient (LAC), mass attenuation coefficient (MAC), half value layer (HVL), tenth value layer (TVL), mean free path (MFP), attenuation factor (Att%) and lead equivalent thickness of the lead-free BaO-doped TeZnCaB glass system. The radiation shielding characteristics were enhanced with the addition of BaO to the TeZnCaB glass samples, and the glasses with higher BaO contents had the best attenuation performance, so adding BaO to TeZnCaB glass is an effective way to improve its radiation shielding performance. The best lead equivalent thickness was 2.626 mm, and these glasses could be good materials for medical diagnostics applications.
Keywords: heavy metal oxides, lead-free, melt-quenching method, x-ray attenuation
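The shielding parameters listed in the abstract are all simple functions of the measured linear attenuation coefficient; the sketch below shows the standard Beer-Lambert relations with an assumed LAC value, since the measured coefficients are not reproduced here.

```python
import numpy as np

# Sketch: photon-shielding quantities derived from the linear attenuation
# coefficient (LAC) via the Beer-Lambert law I = I0 * exp(-mu * t).
# The LAC and thickness below are illustrative, not the measured values.

mu = 0.95            # linear attenuation coefficient (1/cm), assumed
density = 4.725      # glass density (g/cm^3), upper value from the abstract
thickness = 1.0      # sample thickness (cm), assumed

mac = mu / density                         # mass attenuation coeff. (cm^2/g)
hvl = np.log(2) / mu                       # thickness halving the intensity
tvl = np.log(10) / mu                      # thickness leaving 10% intensity
mfp = 1 / mu                               # mean free path
att = 100 * (1 - np.exp(-mu * thickness))  # attenuation factor in percent

print(f"MAC = {mac:.4f} cm^2/g, HVL = {hvl:.3f} cm, "
      f"TVL = {tvl:.3f} cm, MFP = {mfp:.3f} cm, Att = {att:.1f}%")
```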
Procedia PDF Downloads 31
868 Computer-Assisted Strategies to Help Pharmacists
Authors: Komal Fizza
Abstract:
All around the world, professionals in every field take great support from their computers. Computer-assisted strategies not only increase the efficiency of professionals but, in healthcare, also help in life-saving interventions. This research is aimed at two things: first, to find out whether computer-assisted strategies are useful for pharmacists or not, and secondly, how much they assist a pharmacist in making quality interventions. Shifa International Hospital is a 500-bed hospital running an Antimicrobial Stewardship programme; during stewardship rounds, pharmacists observed that many wrong doses of antibiotics were being prescribed, and at times these were overlooked even by another pharmacist. So, with the help of the MIS team, patients were categorized into adults and paediatrics depending on their age. The minimum and maximum dose of every antibiotic in the pharmacy that could be dispensed to a patient was defined and linked to the order-entry window, so that whenever a pharmacist typed an order with a dose below or above the therapeutic limit, an alert would be raised. Whenever this message popped up, it was recorded at the back end along with the antibiotic name, pharmacist ID, date, and time. From 14 January 2015 to 14 March 2015, the software stopped different users 350 times. Of these, 300 were major errors which, had they reached the patient, could have caused considerable harm, while 50 were due to typing errors and minor deviations. The pilot study showed that computer-assisted strategies can be of great help to pharmacists and can improve the efficacy and quality of interventions.
Keywords: antibiotics, computer assisted strategies, pharmacist, stewardship
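The alert mechanism described, a therapeutic dose window per drug and patient category checked at order entry and logged on violation, can be sketched in a few lines; the drug names, limits and log format below are illustrative assumptions, not the hospital's system.

```python
# Sketch of a dose-range alert: each antibiotic carries a therapeutic window
# per patient category, and any order outside the window raises and logs an
# alert. Drug names, limits and the log format are assumed for illustration.
from datetime import datetime

DOSE_LIMITS = {  # (min_dose_mg, max_dose_mg) per (drug, category)
    ("ceftriaxone", "adult"): (1000, 4000),
    ("ceftriaxone", "paediatric"): (250, 2000),
}
alert_log = []

def check_order(drug, category, dose_mg, pharmacist_id):
    """Return True if the order passes; log an alert otherwise."""
    low, high = DOSE_LIMITS[(drug, category)]
    if low <= dose_mg <= high:
        return True
    alert_log.append({
        "drug": drug, "dose_mg": dose_mg, "pharmacist": pharmacist_id,
        "timestamp": datetime.now().isoformat(timespec="seconds"),
    })
    return False

# A paediatric order of 3000 mg exceeds the window and is stopped:
print(check_order("ceftriaxone", "paediatric", 3000, "PH-042"))  # False
print(alert_log)
```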
Procedia PDF Downloads 490
867 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini
Abstract:
A considerable part of the rainfall data used in hydrological practice is available in aggregated form within constant time intervals. This can produce undesirable effects, like the underestimate of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
Keywords: central Italy, extreme events, rainfall data, underestimation errors
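The underestimation mechanism is easy to reproduce: the maximum of a sliding window of duration d over a continuous record is, in general, larger than the maximum over fixed aggregation blocks with ta = d. The sketch below demonstrates this on synthetic rainfall; the series and magnitudes are illustrative, not the Central Italy data.

```python
import numpy as np

# Sketch: true annual maximum depth Hd over a moving window of duration d
# versus the maximum over fixed, non-overlapping intervals with ta = d.
# The synthetic hourly rainfall series below is illustrative only.

rng = np.random.default_rng(1)
hours = 24 * 365
rain = rng.exponential(0.4, size=hours) * (rng.random(hours) < 0.1)

d = 6  # duration of interest, in hourly steps (e.g. 6 h)

# True Hd: maximum over a sliding window of length d
sliding = np.convolve(rain, np.ones(d), mode="valid")
hd_true = sliding.max()

# Aggregated Hd: maximum over fixed blocks of length ta = d (worst case)
blocks = rain[: len(rain) // d * d].reshape(-1, d).sum(axis=1)
hd_fixed = blocks.max()

err = 100 * (hd_true - hd_fixed) / hd_true
print(f"true Hd = {hd_true:.2f}, fixed-interval Hd = {hd_fixed:.2f}, "
      f"underestimate = {err:.1f}%")
```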
Procedia PDF Downloads 191
866 Phase Transition of Aqueous Ternary (THF + Polyvinylpyrrolidone + H2O) System as Revealed by Terahertz Time-Domain Spectroscopy
Authors: Hyery Kang, Dong-Yeun Koh, Yun-Ho Ahn, Huen Lee
Abstract:
Determining the behavior of clathrate hydrates with inhibitors in the THz region will provide useful information about hydrate plug control in the upstream oil and gas industry. In this study, terahertz time-domain spectroscopy (THz-TDS) revealed the inhibition of the THF clathrate hydrate system dosed with polyvinylpyrrolidone (PVP) of three different molecular weights. Distinct footprints of the phase transition in the THz region (0.4-2.2 THz) were analyzed, and the absorption coefficients and the real part of the refractive indices were obtained in the temperature range of 253 K to 288 K. Along with the optical properties, the ring breathing and stretching modes for the different molecular weights of PVP in THF hydrate were analyzed by Raman spectroscopy.
Keywords: clathrate hydrate, terahertz spectroscopy, tetrahydrofuran, inhibitor
Procedia PDF Downloads 339
865 Efficient Principal Components Estimation of Large Factor Models
Authors: Rachida Ouysse
Abstract:
This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild's (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting
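Since the abstract notes that CnPC is computationally equivalent to PC applied to a regularized covariance matrix, the sketch below shows plain PC factor extraction with a simple shrinkage target standing in for the constraint; the shrinkage rule is an illustrative assumption, not the authors' estimator.

```python
import numpy as np

# Sketch: principal-components factor extraction from a large panel, using
# a shrinkage-regularized covariance as a stand-in for the paper's
# constrained (CnPC) problem. Data, shrinkage intensity and scaling are
# illustrative assumptions.

rng = np.random.default_rng(0)
T, N, r = 100, 200, 3                     # note N > T is allowed

F = rng.normal(size=(T, r))               # latent factors
L = rng.normal(size=(N, r))               # loadings
X = F @ L.T + rng.normal(size=(T, N))     # observed panel with noise

S = (X.T @ X) / T                         # sample covariance (N x N)
rho = 0.1                                 # shrinkage intensity, assumed
S_reg = (1 - rho) * S + rho * np.trace(S) / N * np.eye(N)

# PC estimator: leading eigenvectors of the (regularized) covariance
eigvals, V = np.linalg.eigh(S_reg)
loadings = np.sqrt(N) * V[:, -r:]         # top-r eigenvectors, scaled
factors = X @ loadings / N                # estimated common factors (T x r)
print(factors.shape)                      # (100, 3)
```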
Procedia PDF Downloads 150
864 Formulation of a Stress Management Program for Human Error Prevention in Nuclear Power Plants
Authors: Hyeon-Kyo Lim, Tong-il Jang, Yong-Hee Lee
Abstract:
As in any nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. For accident prevention, it is therefore indispensable to analyze and manage the influence of any factor which may raise the possibility of human errors. Among many factors, stress has been reported to have a significant influence on human performance, and the stress level of a person may fluctuate over time. To handle this variation over time, a robust stress management program is required, especially in nuclear power plants. This study therefore aimed to develop a stress management program as a part of a Fitness-for-Duty (FFD) program for workers in nuclear power plants. Since the meaning of FFD may differ according to research objectives, an appropriate definition of FFD was established in this study with special reference to human error prevention, and diverse stress factors were elicited for the management of human error susceptibility. In addition, with consideration of conventional FFD management programs, appropriate tests and interventions were introduced over the whole employment cycle, including the selection and screening of workers, job allocation, job rotation, and termination of employment, as well as an Employee Assistance Program (EAP). The results showed that most tools concentrated their weights mainly on common organizational factors such as Demands, Supports, and Relationships, in that order, which were identified as major stress factors.
Keywords: human error, accident prevention, work performance, stress, fatigue
Procedia PDF Downloads 326
863 Bruch's Membrane Opening in High Myopia and Its Correlation with Axial Length
Authors: Sanjeeb Kumar Mishra, Aartee Jha, Madhu Thapa, Pragati Gautam
Abstract:
Introduction: High myopia has become a matter of global concern, as it is a major risk factor for glaucoma. Various optic nerve head changes occur in high myopia over time, which may make it difficult to detect pathologies associated with high myopia through conventional funduscopy examination alone. Bruch's Membrane Opening (area and minimum rim width) is considered an anatomically more accurate and reliable landmark than the conventional clinical disc margin. Study Design: A hospital-based, cross-sectional, non-interventional study. Purpose: The purpose of our study was to measure Bruch's Membrane Opening (area and Minimum Rim Width) in highly myopic eyes and correlate it with axial length. Methods: A cross-sectional study was conducted at B.P. Koirala Lions Center for Ophthalmic Studies, a tertiary-level eye center in Nepal. 80 eyes of 40 subjects (40% male and 60% female) aged 18-35 years with high myopia (spherical equivalent (SE) ≥ -6 D) were taken as cases. Among them, the right eyes of 39 and the left eyes of 34 myopic subjects were included in the study. Spectral Domain Optical Coherence Tomography of both eyes was performed using the Glaucoma Module Premiere Edition (GMPE) with the Anatomic Positioning System (APS) to measure Bruch's Membrane Opening (area and Minimum Rim Width). Axial length was measured using partial coherence interferometry (IOL Master). Results: Among the 40 myopic subjects, 16 (40%) were males and 24 (60%) were females. The mean age of the myopic subjects was 24.64 ± 5.10 years, with minimum and maximum ages of 18 and 35 years, respectively. The mean BMO area was 2.28 ± 0.48 mm² in the right eye and 2.15 ± 0.59 mm² in the left eye. The BMO area in highly myopic patients was significantly correlated with axial length: the correlation was statistically significant in the right eye (r = 0.465, p < 0.003) and in the left eye (r = 0.374, p < 0.029). Likewise, the mean BMO-MRW was 325.69 ± 96 µm in the right eye and 339.20 ± 79.50 µm in the left eye, and there was a significant correlation of BMO-MRW with axial length in both eyes. Moreover, a significant negative correlation of the inferior temporal, nasal, and inferior nasal quadrants (p < 0.05) of the right-eye BMO-MRW was found with the axial length of the right eye, whereas all BMO-MRW quadrants of the left eye were negatively correlated (p < 0.05) with the axial length of the left eye. No significant differences were found between the right eye and the left eye when comparing the means of refractive error, axial length, BMO area, and BMO-MRW. Conclusion: From this study, it can be concluded that the BMO area enlarges in high myopia with increasing axial length. Additionally, BMO-MRW thinning occurs along with BMO enlargement and increases with axial length. There were no significant differences in refractive error, axial length, BMO area, or BMO-MRW between the right and left eyes.
Keywords: high myopia, Bruch's membrane opening, Bruch's membrane opening minimum rim width, spectral domain optical coherence tomography
Procedia PDF Downloads 13
862 Attention and Memory in the Music Learning Process in Individuals with Visual Impairments
Authors: Lana Burmistrova
Abstract:
Introduction: The influence of visual impairments on several cognitive processes used in the music learning process is an increasingly important area in special education and cognitive musicology. Many children have visual impairments due to refractive errors and irreversible inhibitors. However, owing to compensatory neuroplasticity and functional reorganization, congenitally blind (CB) and early blind (EB) individuals use several areas of the occipital lobe to perceive and process auditory and tactile information. CB individuals have greater memory capacity and memory reliability, and fewer false-memory mechanisms are used while executing several tasks; they have better working memory (WM) and short-term memory (STM). Blind individuals use several strategies while executing tactile and working-memory n-back tasks: a verbalization strategy (mental recall), a tactile strategy (tactile recall) and combined strategies. Methods and design: The aim of the pilot study was to substantiate similar tendencies in blind and sighted individuals while they executed the attention, memory and combined auditory tasks constructed for this study, and to investigate the attention, memory and combined mechanisms used in the music learning process. For this study, eight (n=8) blind and eight (n=8) sighted individuals aged 13-20 were chosen. All respondents had more than five years of music performance and music learning experience. In the attention task, all respondents had to identify pitch changes in tonal and randomized melodic pairs. The memory task was based on the mismatch negativity (MMN) proportion theory: 80 percent standard (unchanged) and 20 percent deviant (changed) stimuli (sequences). Every sequence was named (na-na, ra-ra, za-za) and several items (pencil, spoon, tealight) were assigned to each sequence. Respondents had to recall the sequences, to associate them with the items and to detect possible changes. While executing the combined task, all respondents had to focus attention on the pitch changes and had to detect and describe these during the recall. Results and conclusion: The results support specific features in CB and EB individuals, and similarities between late blind (LB) and sighted individuals. While executing the attention and memory tasks, a tendency was observed for CB and EB individuals to use more precise execution tactics and more advanced periodic memory while focusing on auditory and tactile stimuli. While executing the memory and combined tasks, CB and EB individuals used passive working memory to recall standard sequences, active working memory to recall deviant sequences, and combined strategies. Based on the observation results, the assessment of blind respondents and recording specifics, the following attention and memory correlations were identified: reflective attention and STM, reflective attention and periodic memory, auditory attention and WM, tactile attention and WM, auditory-tactile attention and STM. The results and the summary of findings highlight the attention and memory features used in the music learning process in the context of blindness, and the tendency for several attention and memory types to correlate depending on the task, strategy and individual features.
Keywords: attention, blindness, memory, music learning, strategy
Procedia PDF Downloads 184
861 Designing a Dispersion Flattened Single Mode PCF for E-Band to U-Band with Less Effective Area
Authors: Shabbir Chowdhury
Abstract:
A signal is broadened when it passes through a channel; this phenomenon is known as dispersion. Dispersion differs for different wavelengths, so the usable bandwidth becomes limited. Researchers have tried to design optical fibers with flattened dispersion in order to use more bandwidth and to support wavelength division multiplexing. In this paper, a single-mode photonic crystal fiber with flattened dispersion and a small effective area is proposed, with silica used as the fiber material. The effective dispersion varies from -1.996 to 0.1783 ps/(nm·km) across the entire E-band to U-band range, and the fiber has an effective area of only 3.048 µm² (at a wavelength of 1.75 µm).
Keywords: photonic crystal fiber, dispersion, bandwidth, chromatic dispersion, effective dispersion, dispersion compensation, effective area, effective refractive index
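For context on why dispersion varies with wavelength, the sketch below evaluates the material-dispersion parameter D of bulk silica from the Sellmeier equation; a real PCF adds waveguide dispersion on top, computed with a mode solver, which is what a design like the one proposed uses to flatten the total curve.

```python
import numpy as np

# Sketch: chromatic material dispersion D = -(lambda/c) * d^2n/dlambda^2 of
# bulk fused silica from the Sellmeier equation (Malitson coefficients).
# Waveguide dispersion of the PCF, which needs a mode solver, is not shown.

B = np.array([0.6961663, 0.4079426, 0.8974794])
C = np.array([0.0684043, 0.1162414, 9.896161]) ** 2   # um^2

def n_silica(lam_um):
    lam2 = lam_um ** 2
    return np.sqrt(1 + np.sum(B * lam2 / (lam2 - C)))

def D_ps_per_nm_km(lam_um, h=1e-3):
    # second derivative of n w.r.t. wavelength (1/um^2), central difference
    d2n = (n_silica(lam_um + h) - 2 * n_silica(lam_um)
           + n_silica(lam_um - h)) / h ** 2
    c = 2.99792458e8                                  # m/s
    D_si = -(lam_um * 1e-6 / c) * (d2n * 1e12)        # s/m^2
    return D_si * 1e6                                 # -> ps/(nm*km)

for lam in (1.40, 1.55, 1.675):                       # E- to U-band region
    print(f"{lam:.3f} um: D = {D_ps_per_nm_km(lam):+7.2f} ps/(nm*km)")
```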
Procedia PDF Downloads 415
860 Multi-Scale Control Model for Network Group Behavior
Authors: Fuyuan Ma, Ying Wang, Xin Wang
Abstract:
Social networks have become breeding grounds for the rapid spread of rumors and malicious information, posing threats to societal stability and causing significant public harm. Existing research focuses on simulating the spread of information and its impact on users through propagation dynamics, and applies methods such as greedy approximation strategies to approximate the optimal control solution at the global scale. However, the greedy strategy at the global scale may fall into locally optimal solutions, and the approximate simulation of information spread may accumulate errors. Therefore, we propose a multi-scale control model for network group behavior, introducing individual and group scales on top of the greedy strategy's global scale. At the individual scale, we calculate the propagation influence of nodes based on their structural attributes to alleviate the issue of local optimality. At the group scale, we conduct precise propagation simulations to avoid introducing cumulative errors from approximate calculations, without increasing computational costs. Experimental results on three real-world datasets demonstrate the effectiveness of the proposed multi-scale model in controlling network group behavior.
Keywords: influence blocking maximization, competitive linear threshold model, social networks, network group behavior
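A precise propagation simulation at the group scale can be illustrated with the linear threshold model named in the keywords: the sketch below runs a deterministic cascade on a random weighted digraph. The graph, weights, thresholds and seed set are illustrative assumptions.

```python
import numpy as np

# Sketch: spread simulation under a linear threshold model. A node
# activates once the summed influence weights of its active in-neighbours
# reach its threshold. All data below are randomly generated.

rng = np.random.default_rng(2)
n = 50
W = (rng.random((n, n)) < 0.08) * rng.random((n, n))   # W[i, j]: i -> j
W /= np.maximum(W.sum(axis=0, keepdims=True), 1.0)     # in-weights sum <= 1
theta = rng.random(n)                                  # node thresholds

active = np.zeros(n, dtype=bool)
active[[0, 1, 2]] = True                               # seed spreaders

# Deterministic cascade: iterate until no new activations occur
while True:
    influence = W.T @ active                           # incoming active weight
    new = (~active) & (influence >= theta)
    if not new.any():
        break
    active |= new

print(f"{active.sum()} of {n} nodes activated")
```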
Procedia PDF Downloads 21
859 True Single SKU Script: Applying the Automated Test to Set Software Properties in a Global Software Development Environment
Authors: Antonio Brigido, Maria Meireles, Francisco Barros, Gaspar Mota, Fernanda Terra, Lidia Melo, Marcelo Reis, Camilo Souza
Abstract:
As the globalization of the software process advances, companies are increasingly committed to improving software development technologies across multiple locations. On the other hand, working with teams distributed across different locations also raises new challenges. In this sense, automated processes can help to improve the quality of process execution. This work therefore presents the development of a tool called TSS Script, which automates the sample preparation process for carrier requirements validation tests. The objective of the work is to obtain significant gains in execution time and to reduce errors in scenario preparation. To estimate the gains in time, executions performed in an automated and in a manual way were timed. In addition, a questionnaire-based survey was conducted to discover new requirements and improvements to include in this automated support. The results show an average gain of 46.67% of the total hours worked on sample preparation. The use of the tool avoids human errors and thus adds greater quality and speed to the process. Another relevant factor is that the tester can perform other activities in parallel with sample preparation.
Keywords: Android, GSD, automated testing tool, mobile products
Procedia PDF Downloads 317
858 Sol-Gel Synthesis and Optical Characterisation of TiO2 Thin Films for Photovoltaic Application
Authors: Arabi Nour El Houda, Iratni Aicha, Talaighil Razika, Bruno Capoen, Mohamed Bouazaoui
Abstract:
TiO2 thin films have been prepared by the sol-gel dip-coating technique in order to elaborate antireflective coatings for monocrystalline silicon (mono-Si). Titanium isopropoxide was chosen as the precursor, with hydrochloric acid as a catalyst, to prepare a stable solution. The optical properties have been tailored by varying the solution concentration, the withdrawal speed, and the heat treatment. We show that a TiO2 single layer 64.5 nm in thickness, heat-treated at 450 °C or 300 °C, reduces the mono-Si reflection below 3% over the broadband spectral domains [669-834] nm and [786-1006] nm, respectively. These performances are similar to those obtained with double layers of low- and high-refractive-index glasses.
Keywords: thin film, dip-coating, mono-crystalline silicon, titanium oxide
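The single-layer antireflection behaviour reported here follows standard thin-film interference; the sketch below computes the normal-incidence reflectance of a 64.5 nm film on silicon with assumed, dispersion-free refractive indices, so the numbers are indicative only.

```python
import numpy as np

# Sketch: normal-incidence reflectance of a single thin film on a substrate
# from the standard thin-film interference (Airy) formula. Constant indices
# are an illustrative simplification; real TiO2 and Si are dispersive.

n0, n1, ns = 1.0, 2.3, 3.7   # air, TiO2 film (assumed), silicon (assumed)
d = 64.5e-9                  # film thickness from the abstract (m)

def reflectance(lam):
    delta = 2 * np.pi * n1 * d / lam          # phase thickness of the film
    r01 = (n0 - n1) / (n0 + n1)               # Fresnel coefficients
    r1s = (n1 - ns) / (n1 + ns)
    r = ((r01 + r1s * np.exp(-2j * delta))
         / (1 + r01 * r1s * np.exp(-2j * delta)))
    return abs(r) ** 2

for lam in np.linspace(500e-9, 1100e-9, 7):
    print(f"{lam * 1e9:5.0f} nm: R = {100 * reflectance(lam):5.2f}%")

# Minimum reflectance sits near the quarter-wave condition lambda = 4*n1*d
print(f"quarter-wave wavelength ~ {4 * n1 * d * 1e9:.0f} nm")
```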
Procedia PDF Downloads 438
857 AI-Based Technologies for Improving Patient Safety and Quality of Care
Authors: Tewelde Gebreslassie Gebreanenia, Frie Ayalew Yimam, Seada Hussen Adem
Abstract:
Patient safety and quality of care are essential goals of health care delivery, but they are often compromised by human errors, system failures, or resource constraints. Artificial intelligence (AI), a quickly developing field, can provide fresh approaches to enhancing patient safety and treatment quality in a variety of healthcare contexts. AI has the potential to decrease errors and enhance patient outcomes by carrying out tasks that would typically require human intelligence: detecting and preventing adverse events; monitoring and warning patients and clinicians about changes in vital signs, symptoms, or risks; offering individualized and evidence-based recommendations for diagnosis, treatment, or prevention; and assessing and enhancing the effectiveness of health care systems and services. This study examines the state of the art and potential future applications of AI-based technologies for enhancing patient safety and care quality, as well as the opportunities and problems they present for patients, policymakers, researchers, and healthcare providers. To ensure the safe, efficient, and responsible application of AI in healthcare, the paper also addresses the ethical, legal, social, and technical challenges that must be addressed and regulated.
Keywords: artificial intelligence, health care, human intelligence, patient safety, quality of care
Procedia PDF Downloads 78
856 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137
Authors: Abdulsalam M. Alhawsawi
Abstract:
Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons along the energy spectrum. Some photon energies are hard to reproduce in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code, and the simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for both a point source and a Marinelli beaker geometry. For a water-filled Marinelli beaker, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5-simulated efficiency, while the 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137
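The core of the approach, anchoring a simulated efficiency curve to one measured Cs-137 point, reduces to a single scale factor; the sketch below illustrates it with made-up efficiencies in place of the MCNP5 output.

```python
# Sketch: efficiency transfer anchored at 662 keV (Cs-137). The simulated
# efficiencies of other energies are rescaled by the ratio of the measured
# to the simulated 662 keV efficiency. All numbers are illustrative, not
# MCNP5 output or measured data.

eff_mc = {59: 0.030, 662: 0.012, 1173: 0.0080, 1332: 0.0072}  # simulated
eff_meas_662 = 0.0115                 # measured full-energy peak efficiency

scale = eff_meas_662 / eff_mc[662]    # anchor simulation to measurement
for energy_kev, eff in sorted(eff_mc.items()):
    print(f"{energy_kev:5d} keV: estimated efficiency = {eff * scale:.4f}")

# The activity of an unknown sample then follows from
#   A = net_counts / (efficiency * emission_probability * live_time)
```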
Procedia PDF Downloads 116
855 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive sampling is an alternative to collecting genetic samples directly from animals: samples such as scats, feathers and hairs are collected without handling the animal. Nevertheless, the use of non-invasive samples has some limitations, the main issue being degraded DNA, which leads to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods able to accommodate the errors and singularities of non-invasive samples; genotype-matching and population-estimation algorithms stand out as important analysis tools that have been adapted to deal with such errors. Despite this recent development of analysis methods, an empirical comparison of their performance is still lacking. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two algorithms (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity between those methods in their likelihood pairwise and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods: its different clustering algorithm and error model seem to lead to a more rigorous selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset, with a consensus between the different estimators for only one dataset. BayesN produced both higher and lower estimates when compared with Capwire. BayesN does not consider the total number of recaptures, as Capwire does, but only the recapture events, which makes the estimator sensitive to data heterogeneity, that is, different capture rates between individuals. In these examples, tolerance of heterogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems more appropriate for general use, considering the balance between time, interface and robustness. The heterogeneity of recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers; Capwire is therefore advisable for general use, since it performs better over a wide range of situations.
Keywords: algorithms, genetics, matching, population
Procedia PDF Downloads 143
854 Investigation about Structural and Optical Properties of Bulk and Thin Film of 1H-CaAlSi by Density Functional Method
Authors: M. Babaeipour, M. Vejdanihemmat
Abstract:
The optical properties of bulk and thin-film 1H-CaAlSi were studied for the two directions (1,0,0) and (0,0,1). The calculations were carried out by the Density Functional Theory (DFT) method using the full-potential approach, with the GGA approximation used for the exchange-correlation energy, and were performed with the WIEN2k package. The results showed that the absorption edge is shifted down by 0.82 eV in the thin film compared with the bulk for both directions. The static values of the real part of the dielectric function were obtained for all four cases, as were the static values of the refractive index. The reflectivity graphs show a pronounced difference between the reflectivity of the thin film and that of the bulk in the ultraviolet region.
Keywords: 1H-CaAlSi, absorption, bulk, optical, thin film
Procedia PDF Downloads 518
853 Examining the Changes in Complexity, Accuracy, and Fluency in Japanese L2 Writing Over an Academic Semester
Authors: Robert Long
Abstract:
The results of a study on the evolution of complexity, accuracy, and fluency (CAF) in the compositions of Japanese L2 university students over an academic semester are presented. One goal was to determine whether any improvement in writing abilities had occurred over the academic term, while another was to examine methods of editing. Participants had 30 minutes to write each essay, with an additional 10 minutes allotted for editing. For editing, participants were divided into two groups, one of which utilized an online grammar checker, while the other half self-edited their initial manuscripts. From the three institutions, there was a total of 159 students. The research questions focused on determining whether CAF had evolved over the term, identifying potential variations in editing techniques, and describing the connections between the CAF dimensions. According to the findings, there was some improvement in accuracy (fewer errors) in all three measures, whereas there was a marked decline in complexity and fluency. As for the second research aim, relating to the interaction among the three dimensions (CAF) and possible increases in fluency being offset by decreases in grammatical accuracy, results showed a logically high correlation between clauses and word counts, and between mean length of T-unit (MLT) and coordinate phrases per T-unit (CP/T), as well as between MLT and clauses per T-unit (C/T); furthermore, word counts and the errors/100-words ratio correlated highly with error-free clause totals (EFCT). Syntactic complexity had a negative correlation with EFCT, indicating that more syntactic complexity relates to decreased accuracy. Concerning the difference in error correction between those who self-edited and those who used an online grammar correction tool, results indicated that the variable of error-free clause ratios (EFCR) showed the greatest difference regarding accuracy, with fewer errors noted for writers using an online grammar checker. As for possible differences between the first and second (edited) drafts regarding CAF, results indicated positive changes in accuracy, with the most significant change seen in complexity (CP/T and MLT), while changes in fluency were relatively insignificant. Results also indicated significant differences among the three institutions, with Fujian University of Technology showing the highest fluency and accuracy. These findings suggest that to raise students' awareness of their overall writing development, teachers should support them in developing more complex syntactic structures, improving their fluency, and making more effective use of online grammar checkers.
Keywords: complexity, accuracy, fluency, writing
Procedia PDF Downloads 39
852 Design of a Compact Herriott Cell for Heat Flux Measurement Applications
Authors: R. G. Ramírez-Chavarría, C. Sánchez-Pérez, V. Argueta-Díaz
Abstract:
In this paper, we present the design of an optical device based on a Herriott multi-pass cell fabricated on a small-sized acrylic slab for heat flux measurements, using the deflection of a laser beam propagating inside the cell. The beam deflection is produced by the heat flux conducted into the acrylic slab, which creates a gradient in the refractive index. The use of a long-path cell as the sensitive element of this measurement device offers the possibility of high sensitivity within a small-sized device. We present the optical design as well as some experimental results in order to validate the device's operating principle.
Keywords: heat flux, Herriott cell, optical beam deflection, thermal conductivity
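The operating principle, deflection proportional to optical path length, can be illustrated with the small-angle photothermal-deflection relation; the sketch below uses an approximate thermo-optic coefficient for acrylic and assumed gradients, not the device's measured values.

```python
# Sketch: a laser beam crossing a medium with a transverse refractive-index
# gradient (here produced by a temperature gradient via the thermo-optic
# coefficient dn/dT) deflects by an angle that grows with the optical path
# length, which is why a multi-pass Herriott cell boosts sensitivity.
# All values are assumed for illustration.

dn_dT = -1.1e-4      # thermo-optic coefficient of acrylic (1/K), approximate
dT_dx = 50.0         # transverse temperature gradient (K/m), assumed
n = 1.49             # refractive index of acrylic

single_pass = 0.05   # geometric path per pass (m), assumed
for passes in (1, 10, 30):
    L = passes * single_pass
    # small-angle deflection: theta ~ (L / n) * dn/dx = (L / n) * dn_dT * dT_dx
    theta = (L / n) * dn_dT * dT_dx
    print(f"{passes:2d} passes: deflection = {theta * 1e6:8.1f} microradians")
```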
Procedia PDF Downloads 656
851 Time Series Forecasting (TSF) Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time into the future to predict. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The (hourly) dataset we used is the Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window
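The fixed look-back framing the study varies can be sketched as a windowing function over the multivariate series; the code below uses random data in place of the Beijing air-quality features, and the window/horizon values simply echo the settings discussed above.

```python
import numpy as np

# Sketch: slicing a multivariate hourly series into fixed look-back windows.
# Each sample is the previous `window` hours; the target is the value of one
# feature `horizon` steps ahead. Random data stands in for the real dataset.

def make_windows(series, window, horizon):
    """Slice (T, F) data into (samples, window, F) inputs and targets."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window + horizon - 1, 0])  # predict feature 0
    return np.asarray(X), np.asarray(y)

data = np.random.default_rng(0).normal(size=(5 * 365 * 24, 6))  # ~5 years
X, y = make_windows(data, window=24, horizon=1)    # one-day look-back, +1 h
print(X.shape, y.shape)                            # (43776, 24, 6) (43776,)

X3, y3 = make_windows(data, window=96, horizon=3)  # four-day look-back, +3 h
print(X3.shape, y3.shape)
```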
Procedia PDF Downloads 154
850 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space
Authors: Nanjiang Chen
Abstract:
In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.
Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation
Procedia PDF Downloads 46
849 Preliminary Study of the Phonological Development in Three and Four Year Old Bulgarian Children
Authors: Tsvetomira Braynova, Miglena Simonska
Abstract:
The article presents the results of research on phonological processes in three- and four-year-old children. For the purpose of the study, an author's test was developed and administered to 120 children. The study covered three areas of research: the word level (96 words), sentence repetition (10 sentences) and the generation of the child's own speech from a picture (15 pictures). The test also gives additional information about the articulation errors of the assessed children. The main purpose of the testing is to analyze all the phonological processes that occur at this age in Bulgarian children and to identify which are typical and atypical for this age. The results show that the most common phonological errors children make are: substitution of a sound, elision of a sound, metathesis of a sound, elision of a syllable, and elision of consonants clustered in a syllable. All examined children were identified with an articulation disorder of the bilabial lambdacism type. Measuring the correlation between the average length of repeated speech and the average length of generated speech, the analysis shows that the more words a child can repeat in the sentence-repetition part, the more words they can be expected to produce in the sentence-generation part. The results of this study show that the word-naming task provides sufficient and representative information to assess a child's phonology.
Keywords: assessment, phonology, articulation, speech-language development
Procedia PDF Downloads 186
848 Efficient Model Order Reduction of Descriptor Systems Using Iterative Rational Krylov Algorithm
Authors: Muhammad Anwar, Ameen Ullah, Intakhab Alam Qadri
Abstract:
This study presents a technique utilizing the Iterative Rational Krylov Algorithm (IRKA) to reduce the order of large-scale descriptor systems. Descriptor systems, which incorporate differential and algebraic components, pose unique challenges in Model Order Reduction (MOR). The proposed method partitions the descriptor system into polynomial and strictly proper parts to minimize approximation errors, applying IRKA exclusively to the strictly proper component. This approach circumvents the unbounded errors that arise when IRKA is applied directly to the entire system. A comparative analysis demonstrates the high accuracy of the reduced model and a significant reduction in computational burden. The reduced model enables more efficient simulations and streamlined controller designs. The study highlights the effectiveness of IRKA-based MOR in optimizing the performance of complex systems across various engineering applications. The proposed methodology offers a promising solution for reducing the complexity of large-scale descriptor systems while maintaining their essential characteristics and facilitating their analysis, simulation, and control design.
Keywords: model order reduction, descriptor systems, iterative rational Krylov algorithm, interpolatory model reduction, computational efficiency, projection methods, H₂-optimal model reduction
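For orientation, the sketch below shows the basic IRKA fixed-point iteration on a strictly proper SISO state-space model, the kind of component the method isolates before reduction. It uses random data, keeps only real reflected poles for simplicity, and omits the descriptor-specific splitting, so it is a sketch of plain IRKA, not the authors' algorithm.

```python
import numpy as np

# Sketch: basic IRKA iteration on a strictly proper SISO system (A, b, c).
# Random stable data; real shifts kept for simplicity (complex reduced
# poles would be reflected as -conj(pole)); no convergence safeguards.

rng = np.random.default_rng(0)
n, r = 60, 6
A = -np.diag(rng.uniform(0.5, 50.0, n)) + 0.1 * rng.normal(size=(n, n))
b = rng.normal(size=(n, 1))
c = rng.normal(size=(1, n))

shifts = np.linspace(1.0, 10.0, r)       # initial interpolation points
for _ in range(100):
    # rational Krylov bases interpolating the transfer function at shifts
    V = np.hstack([np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
    W = np.hstack([np.linalg.solve(s * np.eye(n) - A.T, c.T) for s in shifts])
    V, _ = np.linalg.qr(V)
    W, _ = np.linalg.qr(W)
    Ar = np.linalg.solve(W.T @ V, W.T @ A @ V)   # Petrov-Galerkin projection
    # H2-optimality fixed point: next shifts are the mirrored reduced poles
    new_shifts = np.sort(-np.linalg.eigvals(Ar).real)
    if np.allclose(new_shifts, shifts, rtol=1e-8):
        break
    shifts = new_shifts

br = np.linalg.solve(W.T @ V, W.T @ b)
cr = c @ V
print("reduced system:", Ar.shape, br.shape, cr.shape)
```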
Procedia PDF Downloads 31
847 A Numerical Investigation of Total Temperature Probes Measurement Performance
Authors: Erdem Meriç
Abstract:
Measuring the total temperature of an air flow accurately is a very important requirement in the development phases of many industrial products, including gas turbines and rockets. Thermocouples are very practical devices for measuring temperature in such cases, but in high-speed and high-temperature flows, the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to the heat transfer mechanisms of convection, conduction, and radiation. To avoid errors in total temperature measurement, special probe designs that are experimentally characterized are used. In this study, a validation case consisting of an experimental characterization of a specific class of total temperature probes is selected from the literature to develop a numerical conjugate heat transfer analysis methodology for studying the flow field around the total temperature probe and the solid temperature distribution. The validated conjugate heat transfer methodology is used to investigate the flow structures inside and around the probe and the effects of probe design parameters, such as the ratio between inlet and outlet hole areas and the probe tip geometry, on measurement accuracy. Lastly, a thermal model is constructed to account for errors in total temperature measurement for a specific class of probes under different operating conditions. The outcomes of this work can guide experimentalists in designing a very accurate total temperature probe and in quantifying the possible error for their specific case.
Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes
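The measurement error at stake follows from the recovery-factor relation between static, total and indicated temperature; the sketch below evaluates it at a few Mach numbers with an assumed bare-junction recovery factor, not the characterised probe's value.

```python
# Sketch: the error a total temperature probe is designed to limit. An
# unshielded junction recovers only part of the kinetic temperature rise,
#   T_probe = T_s * (1 + r*(g-1)/2 * M^2),
# versus the true total temperature
#   T_t = T_s * (1 + (g-1)/2 * M^2).
# The recovery factor r is an assumed typical value for a bare junction.

g = 1.4            # ratio of specific heats for air
T_s = 288.0        # static temperature (K), assumed
r = 0.86           # recovery factor of a bare junction, assumed

for M in (0.3, 0.8, 1.5):
    T_t = T_s * (1 + (g - 1) / 2 * M ** 2)           # true total temperature
    T_probe = T_s * (1 + r * (g - 1) / 2 * M ** 2)   # indicated temperature
    print(f"M = {M:3.1f}: T_t = {T_t:6.1f} K, probe reads {T_probe:6.1f} K, "
          f"error = {T_t - T_probe:5.1f} K")
```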
Procedia PDF Downloads 134
846 Physico-Chemical, GC-MS Analysis and Cold Saponification of Onion (Allium cepa L) Seed Oil
Authors: A. A Warra, S. Fatima
Abstract:
The experimental investigation revealed that the hexane extract of onion seed oil has an acid value, iodine value, peroxide value, saponification value, relative density and refractive index of 0.03 ± 0.01 mg KOH/g, 129.80 ± 0.21 g I2/100 g, 3.00 ± 0.00 meq H2O2, 203.00 ± 0.71 mg KOH/g, 0.82 ± 0.01 and 1.44 ± 0.00, respectively. The percentage yield was 50.28 ± 0.01%, and the colour of the oil was light green. We restricted our GC-MS spectra interpretation to compound identification, particularly of fatty acids, which were identified as palmitic acid, linolelaidic acid, oleic acid, stearic acid, behenic acid, linolenic acid and eicosatetraenoic acid. The pH, foamability, total fatty matter, total alkali and percentage chloride of the onion oil soap were 11.03 ± 0.02, 75.13 ± 0.15 cm³, 36.66 ± 0.02%, 0.92 ± 0.02% and 0.53 ± 0.15%, respectively. Its texture was soft and its colour a lighter green. The results indicate that the hexane extract of onion seed oil has potential for the cosmetics industry.
Keywords: onion seeds, soxhlet extraction, physicochemical, GC-MS, cold saponification
Procedia PDF Downloads 316
845 I Don't Want to Have to Wait: A Study Into the Origins of Rule Violations at Rail Pedestrian Level Crossings
Authors: James Freeman, Andry Rakotonirainy
Abstract:
Train-pedestrian collisions are common and are the most likely to result in severe injuries and fatalities when compared to other types of rail crossing accidents. However, limited research has focused on understanding the reasons why some pedestrians break level crossing rules, which limits the development of effective countermeasures. As a result, this study undertook a deeper exploration of the origins of risky pedestrian behaviour through structured interviews. A total of 40 pedestrians who admitted to either intentionally breaking crossing rules or making crossing errors participated in an in-depth telephone interview. Qualitative analysis via thematic analysis revealed that participants were more likely to report deliberately breaking rules (rather than making errors), particularly after the train had passed the crossing as compared to before it arrived. Predominant reasons for such behaviours were identified as calculated risk taking, impatience, poor knowledge of the rules and a low perceived likelihood of detection. The findings have direct implications for the development of effective countermeasures to improve crossing safety (and manage risk), such as increased surveillance and transit officer presence, as well as installing appropriate barriers that either deter or prevent pedestrians from violating crossing rules. This paper further outlines the study findings with regard to the development of countermeasures and provides direction for future research efforts in this area.
Keywords: crossings, mistakes, risk, violations
Procedia PDF Downloads 415
844 Wrong Site Surgery Should Not Occur In This Day And Age!
Authors: C. Kuoh, C. Lucas, T. Lopes, I. Mechie, J. Yoong, W. Yoong
Abstract:
For all surgeons, there is one preventable but still frequently occurring complication: wrong site surgery. It can have potentially catastrophic, irreversible, or even fatal consequences for patients. With the exponential development of microsurgery and the use of advanced technological tools, operating on the wrong side, the wrong anatomical part, or even the wrong person is seen as the most visible and destructive of all surgical errors, and perhaps the error most dreaded by clinicians, as it threatens their licences and arouses feelings of guilt. Despite the implementation of the WHO surgical safety checklist more than a decade ago, the incidence of wrong-site surgery remains relatively high, leading to tremendous physical and psychological repercussions for the clinicians involved, as well as a financial burden for the healthcare institution. In this presentation, the authors explore the various factors, a combination of environmental and human ones, that can lead to wrong site surgery, and evaluate their impact on patients, practitioners, their families, and the medical industry. Major contributing factors to these "never events" include deviations from checklists, excessive workload, and poor communication. Two real-life cases are discussed, systems that can be implemented to prevent these errors are highlighted, and lessons learnt from other industries are presented. The authors suggest that reinforcing speaking up, implementing professional training, and greater patient involvement can potentially improve safety in surgery and electrosurgery.
Keywords: wrong side surgery, never events, checklist, workload, communication
Procedia PDF Downloads 184
843 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows emission inventories to be generated from a reduced set of input parameters, adapted to the conditions existing in Morocco and in other developing countries; while several simplifications are made, the full performance of the model results is kept. A further important advantage of the model is that it allows the calculation of the uncertainty of the emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It improves on the accuracy of previous line-source Gaussian plume formulas without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line-source sections; these errors will be cancelled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, the use of a combination of discretized-source and analytical line-source formulas reduces the error remarkably. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport
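The discretized-source idea can be sketched by summing ground-reflected Gaussian point plumes along the road segment; all dispersion parameters, the emission rate and the geometry below are assumed values, not DISPOLSPEM's formulation.

```python
import numpy as np

# Sketch: a finite line source approximated by a row of point sources, each
# contributing a ground-reflected Gaussian plume, summed at one receptor.
# Emission rate, sigmas and geometry are assumed values for illustration.

def point_plume(q, u, y, z, h, sy, sz):
    """Concentration (g/m^3) of one point source with ground reflection."""
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y ** 2 / (2 * sy ** 2))
            * (np.exp(-(z - h) ** 2 / (2 * sz ** 2))
               + np.exp(-(z + h) ** 2 / (2 * sz ** 2))))

line_q = 1e-3              # line emission rate (g/(m*s)), assumed
u, h = 3.0, 0.5            # wind speed (m/s), effective source height (m)
n_pts = 50                 # discretization of a 200 m crosswind road segment
ys = np.linspace(-100.0, 100.0, n_pts)   # point-source positions (m)
dq = line_q * 200.0 / n_pts              # emission per point source (g/s)

x_r, y_r, z_r = 50.0, 0.0, 1.5           # receptor 50 m downwind (m)
sy, sz = 0.08 * x_r, 0.06 * x_r          # crude neutral-stability sigmas (m)

c = sum(point_plume(dq, u, y_r - y_s, z_r, h, sy, sz) for y_s in ys)
print(f"receptor concentration: {c * 1e6:.1f} micrograms/m^3")
```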
Procedia PDF Downloads 442