Search results for: current and angular errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9794

9464 The Mechanical and Electrochemical Properties of DC-Electrodeposited Ni-Mn Alloy Coating with Low Internal Stress

Authors: Chun-Ying Lee, Kuan-Hui Cheng, Mei-Wen Wu

Abstract:

The nickel-manganese (Ni-Mn) alloy coating prepared by DC electrodeposition in a sulphamate bath was studied. The effects of process parameters, such as current density and electrolyte composition, on the cathodic current efficiency, microstructure, internal stress, and mechanical properties were investigated. Because of its crucial role in the electroforming of microelectronic components, the development of a low-internal-stress coating with high leveling power was emphasized. It was found that both the coating's manganese content and the cathodic current efficiency increased with increasing current density. In addition, the internal stress of the deposited coating was compressive at low current densities and became tensile at higher current densities. Moreover, metallographic observation, X-ray diffraction measurement, transmission electron microscope (TEM) examination, and polarization curve measurement were conducted. It was found that the Ni-Mn coating consisted of nano-sized columnar grains and that the maximum hardness of the coating was associated with a (111) preferred orientation in the microstructure. The grain size was refined as the manganese content of the coating increased, which accordingly raised its hardness and tensile strength. In summary, the Ni-Mn coating prepared at a lower current density of 1-2 A/dm² had low internal stress, high leveling power, and better corrosion resistance.

Keywords: Ni-Mn coating, DC plating, internal stress, leveling power

Procedia PDF Downloads 344
9463 True Single SKU Script: Applying the Automated Test to Set Software Properties in a Global Software Development Environment

Authors: Antonio Brigido, Maria Meireles, Francisco Barros, Gaspar Mota, Fernanda Terra, Lidia Melo, Marcelo Reis, Camilo Souza

Abstract:

As the globalization of the software process advances, companies are increasingly committed to improving software development technologies across multiple locations. On the other hand, working with teams distributed across different locations also raises new challenges. In this sense, automated processes can help to improve the quality of process execution. This work therefore presents the development of a tool called TSS Script that automates the sample preparation process for carrier requirements validation tests. The objective of the work is to obtain significant gains in execution time and to reduce errors in scenario preparation. To estimate the time gains, the automated and manual executions were timed. In addition, a questionnaire-based survey was conducted to identify new requirements and improvements to include in this automated support. The results show an average gain of 46.67% of the total hours worked on sample preparation. The use of the tool avoids human errors and therefore adds quality and speed to the process. Another relevant benefit is that the tester can perform other activities in parallel with sample preparation.

Keywords: Android, GSD, automated testing tool, mobile products

Procedia PDF Downloads 281
9462 A ZVT-ZCT-PWM DC-DC Boost Converter with Direct Power Transfer

Authors: Naim Suleyman Ting, Yakup Sahin, Ismail Aksoy

Abstract:

This paper presents a zero voltage transition-zero current transition (ZVT-ZCT)-PWM DC-DC boost converter with direct power transfer. In this converter, the main switch turns on with ZVT and turns off with ZCT. The auxiliary switch turns on and off with zero current switching (ZCS). The main diode turns on with ZVS and turns off with ZCS. Moreover, no additional current or voltage stress occurs on the main devices. The converter features a simple structure, fast dynamic response, and easy control. The proposed converter also offers direct power transfer together with excellent soft switching. In this study, the operating principle of the converter is presented, and its operation is verified on a 1 kW, 100 kHz model.

Keywords: direct power transfer, boost converter, zero-voltage transition, zero-current transition

Procedia PDF Downloads 792
9461 A Double PWM Source Inverter Technique with Reduced Leakage Current for Application on Standalone Systems

Authors: Md. Noman Habib Khan, M. S. Tajul Islam, T. S. Gunawan, M. Hasanuzzaman

Abstract:

The photovoltaic (PV) panel with no galvanic isolation is a well-known system that is effective and delivers power with enhanced efficiency. The PV generation presented here is for a stand-alone system installed in remote areas, where the resulting power is connected to an electronic load installation instead of being tied to the grid. Although very small, the transformer-less topology is shown to have leakage current in the pico-ampere range. Using the PWM technique, the leakage current in different situations is shown. The results demonstrated in this paper show how the pico-ampere leakage current is reduced to the femto-ampere range through the use of inductors and capacitors of suitable values with the load.

Keywords: photovoltaic (PV) panel, duty cycle, pulse duration modulation (PDM), leakage current

Procedia PDF Downloads 512
9460 Laboratory Simulation of Subway Dynamic Stray Current Interference with Cathodically Protected Structures

Authors: Mohammad Derakhshani, Saeed Reza Allahkaram, Michael Isakani-Zakaria, Masoud Samadian, Hojat Sharifi Rasaey

Abstract:

Dynamic stray currents tend to change their magnitude and polarity with time at their source, which creates anodic and cathodic spots on a nearby interfered structure. To date, one of the biggest known sources of dynamic stray current is DC traction systems. Laboratory simulation is a suitable method for applying theoretical principles in order to identify the effective parameters in dynamic stray current influenced corrosion. Simulation techniques can be used to apply various mitigation methods at small scale and to select the most efficient method with regard to field applications. In this research, a laboratory simulation of the potential fluctuations caused by dynamic stray current on a cathodically protected structure was investigated. A lab model capable of generating static and dynamic DC stray currents and simulating their effects on cathodically protected samples was developed based on the stray-current-induced (contact-less) polarization technique. Stray current pick-up and discharge spots on an influenced structure were simulated by inducing fluctuations in the sample's stationary potential. Two mitigation methods for dynamic stray current interference on buried structures were investigated, namely, the application of sacrificial anodes as preferred discharge points for the stray current and potential-controlled cathodic protection. Results showed that sacrificial anodes can be effective in reducing interference only at discharge spots, whereas cathodic protection with potential control is more suitable for mitigating dynamic stray current effects.

Keywords: simulation, dynamic stray current, fluctuating potentials, sacrificial anode

Procedia PDF Downloads 276
9459 AI-Based Technologies for Improving Patient Safety and Quality of Care

Authors: Tewelde Gebreslassie Gebreanenia, Frie Ayalew Yimam, Seada Hussen Adem

Abstract:

Patient safety and quality of care are essential goals of health care delivery, but they are often compromised by human errors, system failures, or resource constraints. In a variety of healthcare contexts, artificial intelligence (AI), a quickly developing field, can provide fresh approaches to enhancing patient safety and treatment quality. AI has the potential to decrease errors and enhance patient outcomes by carrying out tasks that would typically require human intelligence. These tasks include the detection and prevention of adverse events; monitoring and warning patients and clinicians about changes in vital signs, symptoms, or risks; offering individualized and evidence-based recommendations for diagnosis, treatment, or prevention; and assessing and enhancing the effectiveness of health care systems and services. This study examines the state of the art and potential future applications of AI-based technologies for enhancing patient safety and care quality, as well as the opportunities and problems they present for patients, policymakers, researchers, and healthcare providers. The paper also discusses the ethical, legal, social, and technical challenges that must be addressed and regulated to ensure the safe, efficient, and responsible application of AI in healthcare.

Keywords: artificial intelligence, health care, human intelligence, patient safety, quality of care

Procedia PDF Downloads 51
9458 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137

Authors: Abdulsalam M. Alhawsawi

Abstract:

Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons, along the energy spectrum. Some photon energies are hard to find in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium (HPGe) detector based on the 662 keV gamma-ray photon emitted from Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for a point source and a Marinelli beaker geometry. For the Marinelli beaker filled with water, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5 simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable for calculating the activity of unknown samples, as they fall within the 95% confidence level.
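
The scaling idea described above can be summarized in a few lines: the measured Cs-137 efficiency at 662 keV anchors the simulated efficiency curve, and efficiencies at other energies follow from the simulated ratios. The sketch below is illustrative only; the efficiency values are hypothetical placeholders, not results from the paper.

    import numpy as np

    # Hypothetical MCNP5-simulated full-energy-peak efficiencies (energy in keV -> efficiency)
    simulated = {59: 0.060, 662: 0.020, 1170: 0.012, 1330: 0.011}

    measured_cs137 = 0.019   # hypothetical measured efficiency at 662 keV with a Cs-137 check source

    def estimated_efficiency(energy_kev):
        """Scale the simulated curve so that it passes through the measured 662 keV point."""
        return measured_cs137 * simulated[energy_kev] / simulated[662]

    for e in (59, 1170, 1330):
        print(f"{e} keV: estimated efficiency = {estimated_efficiency(e):.4f}")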

Keywords: MCNP5, Monte Carlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137

Procedia PDF Downloads 99
9457 Transient Current Investigations in Liquid Crystalline Polyurethane

Authors: Jitendra Kumar Quamara, Sohan Lal, Pushkar Raj

Abstract:

The electrical conduction behavior of liquid crystalline polyurethane (LCPU) has been investigated under transient conditions in the operating temperature range 50-220°C at various electric fields of 4.35-43.45 kV/cm. The transient currents show a hyperbolic decay character, and the decay exponent ∆t (one-tenth decay time) depends on the field as well as on the temperature. The increase in I0/Is values (where I0 is the current observed immediately after applying the voltage and Is is the steady-state current) and the variation of mobility at high operating temperatures indicate the appearance of the mesophase. The origin of the transient currents has been attributed to the dipolar nature of the carbonyl (C=O) groups in the main chain of LCPU and to the trapping of charge carriers.

Keywords: electrical conduction, transient current, liquid crystalline polymers, mesophase

Procedia PDF Downloads 247
9456 Current Characteristic of Water Electrolysis to Produce Hydrogen, Alkaline, and Acid Water

Authors: Ekki Kurniawan, Yusuf Nur Jayanto, Erna Sugesti, Efri Suhartono, Agus Ganda Permana, Jaspar Hasudungan, Jangkung Raharjo, Rintis Manfaati

Abstract:

The purpose of this research is to study the current characteristic of the electrolysis of mineral water to produce hydrogen, alkaline water, and acid water. Alkaline and hydrogen water are believed to have health benefits. Alkaline water containing hydrogen can act as an anti-oxidant that captures free radicals, which can strengthen the immune system. In Indonesia, there are two existing types of alkaline water producing equipment, but their installation is complicated, and the price is relatively high. Their electrolysis process is slow (6-8 hours) since they are locally made using a 311 VDC full-bridge rectifier power supply. This paper discusses how to make hydrogen and alkaline water with a simple portable mineral water ionizer, an electrolysis device that is easy to carry and able to separate the ions of mineral water into acidic and alkaline water. Under an electric field, positive ions are attracted to the cathode, while negative ions are attracted to the anode. The equivalent circuit can be depicted as an RLC transient circuit. The diode component ensures that the electrolytic current is direct current. Switch S divides the switching times t1, t2, and t3. In the first stage, up to t1, the electrolytic current increases exponentially, like the charging current of an inductor (L). The water molecules behave as dipoles; their orientations, initially random, progressively align with the direction of the applied electric field. In the second stage, up to t2, the electrolytic current decreases exponentially, like the charging current of a capacitor (C). In the third stage, from t3 onward, the current tends to a constant value, like the current flowing through a resistor (R).
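
For illustration, the three stages described above can be sketched as a piecewise current model: an inductor-like exponential rise up to t1, a capacitor-like exponential decay up to t2, and an approximately constant resistive current afterwards. The time constants and current levels below are hypothetical placeholders, not measurements from the study.

    import numpy as np

    def electrolysis_current(t, t1=60.0, t2=300.0, i_peak=2.0, i_steady=0.5,
                             tau_rise=15.0, tau_fall=80.0):
        """Piecewise transient current (A) versus time (s) for the three stages."""
        if t <= t1:                                     # stage 1: inductor-like exponential rise
            return i_peak * (1.0 - np.exp(-t / tau_rise))
        i1 = i_peak * (1.0 - np.exp(-t1 / tau_rise))    # current reached at the end of stage 1
        if t <= t2:                                     # stage 2: capacitor-like exponential decay
            return i_steady + (i1 - i_steady) * np.exp(-(t - t1) / tau_fall)
        return i_steady                                 # stage 3: roughly constant, resistor-like

    for t in (10, 60, 200, 400):
        print(t, round(electrolysis_current(float(t)), 3))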

Keywords: current electrolysis, mineral water, ions, alkaline and acid waters, inductor, capacitor, resistor

Procedia PDF Downloads 78
9455 A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms

Authors: Feixiang Zhao, Shuangcheng Jia, Qian Li

Abstract:

High-precision measurement of a target's position and size is one of the hotspots in the field of vision inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely the Distance Area Number-New Minimize Projection Error (DAN-NMPE) method. Our algorithm contains two parts, DAN and NMPE: DAN is a picture-sequence algorithm, and NMPE is a projection-error optimization algorithm, which greatly improves the measurement accuracy of the target's position and size. Comprehensive experiments validate the effectiveness of our proposed method on a self-made traffic sign dataset. The results show that, with the laser point cloud as the ground truth, the size and position errors of the traffic signs measured by this method are ±5% and 0.48 ± 0.3 m, respectively. In addition, we also compared it with the current mainstream method, which uses a monocular camera to locate and measure traffic signs. DAN-NMPE attains significant improvements compared to existing state-of-the-art methods, improving the measurement accuracy of size and position by 50% and 15.8%, respectively.
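
For readers unfamiliar with projection-error minimization, the sketch below shows the generic idea of recovering a 3D position by minimizing the reprojection error over known camera poses. It is not the authors' DAN-NMPE algorithm; the intrinsics, poses, and point used here are hypothetical.

    import numpy as np
    from scipy.optimize import least_squares

    K = np.array([[800.0, 0.0, 320.0],     # hypothetical pinhole intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def project(X, R, t):
        """Project a 3D point X into pixel coordinates for a camera with pose (R, t)."""
        x = K @ (R @ X + t)
        return x[:2] / x[2]

    def residuals(X, observations, poses):
        """Stack the reprojection errors over all views."""
        return np.concatenate([project(X, R, t) - uv for (R, t), uv in zip(poses, observations)])

    poses = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([-1.0, 0.0, 0.0]))]   # two known views
    X_true = np.array([2.0, 0.5, 10.0])
    observations = [project(X_true, R, t) for R, t in poses]                      # synthetic measurements

    sol = least_squares(residuals, x0=np.array([0.0, 0.0, 5.0]), args=(observations, poses))
    print(sol.x)   # converges toward X_true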

Keywords: monocular camera, GPS, positioning, measurement

Procedia PDF Downloads 118
9454 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without handling the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two algorithms were used for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The genotypes matched by ETLM showed almost no similarity with those matched by the other methods. The different clustering algorithm and error model of ETLM seem to lead to a more rigorous selection, although ETLM's processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity. Heterogeneity here means different capture rates between individuals. In these examples, homogeneity of the data seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering the balance of time, interface, and robustness. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 120
9453 Microstructures and Chemical Compositions of Quarry Dust As Alternative Building Material in Malaysia

Authors: Abdul Murad Zainal Abidin, Tuan Suhaimi Salleh, Siti Nor Azila Khalid, Noryati Mustapa

Abstract:

Quarry dust is an end product of rock crushing processes in quarries and is a concentrated material used as an alternative to fine aggregates for concreting purposes. In quarrying activities, the rocks are crushed into aggregates of varying sizes, from 75 mm down to less than 4.5 mm, the finest of which is categorized as quarry dust. Quarry dust is usually considered waste and is not utilized as a recycled aggregate product. The dumping of quarry dust at the quarry plant poses risks of environmental pollution and health hazards. Therefore, this research attempts to identify the potential of quarry dust as an alternative building material that would reduce material and construction costs, as well as contribute to mitigating the depletion of natural resources. The objectives are to conduct material characterization and to evaluate the properties of fresh and hardened engineering bricks with quarry dust mix proportions. The microstructures of the quarry dust and the bricks were investigated using scanning electron microscopy (SEM), and the results suggest that the shape and surface texture of quarry dust are a combination of hard and angular formations. The chemical composition of the quarry dust was also evaluated using X-ray fluorescence (XRF) and compared against sand and concrete. The quarry dust was found to have a higher presence of alumina (Al₂O₃), indicating the possibility of an early strength effect for the brick. Utilizing quarry dust waste as a replacement material has the potential to conserve non-renewable resources as well as provide a viable alternative to the disposal of current quarry waste.

Keywords: building materials, cement replacement, quarry microstructure, quarry product, sustainable materials

Procedia PDF Downloads 155
9452 Time Series Forecasting (TSF) Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset (hourly) we used is the Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best to predict 3 hours into the future.
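
The look-back window setup described above amounts to turning the hourly series into supervised (window, target) pairs. The sketch below shows this construction on a synthetic stand-in for the hourly signal; the window and horizon values are just examples, not the paper's tuned settings.

    import numpy as np

    def make_windows(series, look_back, horizon):
        """Build (X, y) pairs: each X holds look_back past steps, y is the value horizon steps ahead."""
        X, y = [], []
        for i in range(len(series) - look_back - horizon + 1):
            X.append(series[i : i + look_back])
            y.append(series[i + look_back + horizon - 1])
        return np.array(X), np.array(y)

    hourly = np.sin(np.linspace(0.0, 100.0, 24 * 365))    # stand-in for one hourly air-quality variable
    X, y = make_windows(hourly, look_back=24, horizon=1)  # one-day window, predict 1 hour ahead
    print(X.shape, y.shape)                               # (8736, 24) (8736,)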

Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window

Procedia PDF Downloads 137
9451 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space

Authors: Nanjiang Chen

Abstract:

In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.

Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation

Procedia PDF Downloads 18
9450 Preliminary Study of the Phonological Development in Three and Four Year Old Bulgarian Children

Authors: Tsvetomira Braynova, Miglena Simonska

Abstract:

The article presents the results of research on phonological processes in three- and four-year-old children. For the purpose of the study, an author's test was developed and administered to 120 children. The study included three areas of research: the word level (96 words), sentence repetition (10 sentences), and generating the child's own speech from a picture (15 pictures). The test also gives additional information about the articulation errors of the assessed children. The main purpose of the study is to analyze all phonological processes that occur at this age in Bulgarian children and to identify which are typical and atypical for this age. The results show that the most common phonological errors that children make are: sound substitution, elision of a sound, metathesis of a sound, elision of a syllable, and elision of consonants clustered in a syllable. All examined children were identified with an articulation disorder of the bilabial lambdacism type. Measuring the correlation between the average length of repeated speech and the average length of generated speech, the analysis shows that the more words a child can repeat in the “repeated speech” part, the more words they can be expected to generate in the “generating sentence” part. The results of this study show that the task of naming a word provides sufficient and representative information to assess the child's phonology.

Keywords: assessment, phonology, articulation, speech-language development

Procedia PDF Downloads 155
9449 Solutions for Quality Pre-Control of Crimp Contacts

Authors: C. F. Ocoleanu, G. Cividjian, Gh. Manolea

Abstract:

In this paper, we present two solutions for the quality pre-control of crimp contact connections, intended to identify improperly executed connections at an early stage, before the final assembly of the electrical machines. The first solution involves the experimental determination of specific losses by calculating the initial rate of temperature rise. This can be done by drawing the tangent at the origin of the heating curve. The method can be used to identify bad connections by passing a current through the winding at ambient temperature while simultaneously recording the connection temperatures during the first few minutes after the current is applied. The second proposed solution is to apply a single-level thermal indicator to each crimped element and to perform a heating test with a heating current corresponding to the indicator's critical temperature.
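
The "tangent at the origin" of the heating curve is simply the initial slope of the temperature-rise record. A minimal sketch of that estimation is shown below; the sampling times and temperature values are hypothetical, and the number of early points used for the fit would depend on the actual heating curve.

    import numpy as np

    # time (s) and winding temperature rise (K) recorded just after the current is applied (hypothetical)
    t = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
    theta = np.array([0.0, 0.8, 1.55, 2.25, 2.9, 3.5, 4.05])

    # Tangent at the origin: fit a straight line through the earliest points only,
    # where the heating curve is still approximately linear.
    slope, intercept = np.polyfit(t[:4], theta[:4], 1)
    print(f"initial rate of temperature rise: {slope:.3f} K/s")

    # The specific losses are proportional to this initial slope, so a crimp joint with an
    # abnormally high slope can be flagged before final assembly.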

Keywords: temperature, crimp contact, thermal indicator, current distribution, specific losses

Procedia PDF Downloads 402
9448 A Numerical Investigation of Total Temperature Probes Measurement Performance

Authors: Erdem Meriç

Abstract:

Measuring the total temperature of an air flow accurately is a very important requirement in the development phases of many industrial products, including gas turbines and rockets. Thermocouples are very practical devices for measuring temperature in such cases, but in high-speed and high-temperature flows, the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to the heat transfer mechanisms of convection, conduction, and radiation. To avoid errors in total temperature measurement, special probe designs that are experimentally characterized are used. In this study, a validation case, the experimental characterization of a specific class of total temperature probes, is selected from the literature to develop a numerical conjugate heat transfer analysis methodology for studying the total temperature probe flow field and solid temperature distribution. The validated conjugate heat transfer methodology is used to investigate the flow structures inside and around the probe and the effects of probe design parameters, such as the ratio between inlet and outlet hole areas and probe tip geometry, on measurement accuracy. Lastly, a thermal model is constructed to account for errors in total temperature measurement for a specific class of probes in different operating conditions. The outcomes of this work can guide experimentalists in designing a very accurate total temperature probe and in quantifying the possible error for their specific case.
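
The measurement error discussed here is conventionally expressed through the probe recovery factor; the standard textbook definition is reproduced below for reference (it is the usual convention, not a formula quoted from the paper).

    \[
      r \;=\; \frac{T_{\mathrm{probe}} - T_{\infty}}{T_{0} - T_{\infty}},
      \qquad
      T_{0} \;=\; T_{\infty}\left(1 + \frac{\gamma - 1}{2}\,M^{2}\right),
    \]

where $T_{\mathrm{probe}}$ is the indicated junction temperature, $T_{\infty}$ the static temperature, $T_{0}$ the true total temperature, $\gamma$ the ratio of specific heats, and $M$ the flow Mach number; a perfect probe would have $r = 1$.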

Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes

Procedia PDF Downloads 109
9447 Robust ResNets for Chemically Reacting Flows

Authors: Randy Price, Harbir Antil, Rainald Löhner, Fumiya Togashi

Abstract:

Chemically reacting flows are common in engineering applications such as hypersonic flow, combustion, explosions, manufacturing processes, and environmental assessments. The number of reactions in combustion simulations can exceed 100, putting a large number of flow and combustion problems beyond the capabilities of current supercomputers. Motivated by this, deep neural networks (DNNs) are introduced with the goal of eventually replacing the existing chemistry software packages with DNNs. The DNNs used in this paper are motivated by the Residual Neural Network (ResNet) architecture. In the continuum limit, ResNets become an optimization problem constrained by an ODE. Such a feature allows the use of ODE control techniques to enhance the DNNs. In this work, DNNs are constructed which update the species uⁿ at the nᵗʰ timestep to uⁿ⁺¹ at the (n+1)ᵗʰ timestep. Parallel DNNs are trained for each species, taking uⁿ as input and outputting one component of uⁿ⁺¹. These DNNs are applied to multiple species and reactions common in chemically reacting flows, such as H₂-O₂ reactions. Experimental results show that the DNNs are able to accurately replicate the dynamics in various situations and in the presence of errors.
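
The update uⁿ → uⁿ⁺¹ described above is the defining feature of a residual network: the new state is the old state plus a learned correction. The sketch below shows a minimal residual update block in PyTorch with hypothetical layer widths and species count; it is not the authors' trained architecture (which uses one parallel network per species).

    import torch
    import torch.nn as nn

    class ResidualUpdate(nn.Module):
        """Maps the species state u^n to u^{n+1} as u + f(u): the ResNet-style update."""
        def __init__(self, n_species, hidden=64):
            super().__init__()
            self.f = nn.Sequential(
                nn.Linear(n_species, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, n_species),
            )

        def forward(self, u):
            return u + self.f(u)           # residual connection: u^{n+1} = u^n + f(u^n)

    model = ResidualUpdate(n_species=9)    # assumed species count for an H2-O2 mechanism
    u_n = torch.rand(32, 9)                # batch of hypothetical species states
    u_np1 = model(u_n)
    print(u_np1.shape)                     # torch.Size([32, 9])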

Keywords: chemical reacting flows, computational fluid dynamics, ODEs, residual neural networks, ResNets

Procedia PDF Downloads 98
9446 I Don’t Want to Have to Wait: A Study Into the Origins of Rule Violations at Rail Pedestrian Level Crossings

Authors: James Freeman, Andry Rakotonirainy

Abstract:

Train-pedestrian collisions are common and are the most likely to result in severe injuries and fatalities when compared to other types of rail crossing accidents. However, there is limited research focused on understanding the reasons why some pedestrians break level crossing rules, which limits the development of effective countermeasures. As a result, this study undertook a deeper exploration into the origins of risky pedestrian behaviour through structured interviews. A total of 40 pedestrians who admitted to either intentionally breaking crossing rules or making crossing errors participated in an in-depth telephone interview. Qualitative analysis was undertaken via thematic analysis, which revealed that participants were more likely to report deliberately breaking rules (rather than making errors), particularly after the train had passed the crossing as compared to before it arrived. The predominant reasons for such behaviours were identified to be calculated risk taking, impatience, poor knowledge of rules, and low likelihood of detection. The findings have direct implications for the development of effective countermeasures to improve crossing safety (and manage risk), such as increasing surveillance and transit officer presence, as well as installing appropriate barriers that either deter or physically prevent pedestrians from violating crossing rules. This paper further outlines the study findings in regard to the development of countermeasures and provides direction for future research efforts in this area.

Keywords: crossings, mistakes, risk, violations

Procedia PDF Downloads 392
9445 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and underscore the necessity for their precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing speeds. Future research directions are proposed to bridge these gaps, including enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 16
9444 Transient Signal Generator For Fault Indicator Testing

Authors: Mohamed Shaban, Ali Alfallah

Abstract:

This paper describes an application for testing a fault indicator, although it could also be used for other network protection testing. The application is created in the LabVIEW environment and consists of three parts. The first part of the application is dedicated to transient phenomenon generation and imitates the voltage and current transient signals at the origin of a ground fault. The second part allows setting trend sequences for each current and voltage output signal, with up to six trends per phase. The last part of the application generates a harmonic signal with continuously controllable amplitude of the current or voltage output signal, and the phase shift of each signal can be changed there. Furthermore, any sub-harmonics and higher harmonics can be added to the selected current output signal.
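
As an illustration of the third part, a harmonic test signal with adjustable amplitude, phase shift, and added sub-/higher harmonics can be synthesized in a few lines. The Python sketch below is only illustrative (the actual tool is built in LabVIEW), and the fundamental frequency and harmonic amplitudes are arbitrary examples.

    import numpy as np

    def harmonic_signal(t, f0=50.0, amp=1.0, phase=0.0, harmonics=None):
        """Fundamental plus optional (order, amplitude) components; order < 1 gives a sub-harmonic."""
        s = amp * np.sin(2 * np.pi * f0 * t + phase)
        for order, h_amp in (harmonics or []):
            s += h_amp * np.sin(2 * np.pi * order * f0 * t)
        return s

    t = np.linspace(0.0, 0.1, 5000)                        # 100 ms of signal
    s = harmonic_signal(t, amp=1.0, phase=np.pi / 6,
                        harmonics=[(0.5, 0.2), (3, 0.1), (5, 0.05)])   # sub-harmonic, 3rd and 5th
    print(s[:5])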

Keywords: signal generator-fault indicator, harmonic signal generator, voltage output

Procedia PDF Downloads 473
9443 Wrong Site Surgery Should Not Occur In This Day And Age!

Authors: C. Kuoh, C. Lucas, T. Lopes, I. Mechie, J. Yoong, W. Yoong

Abstract:

For all surgeons, there is one preventable but still frequently occurring complication: wrong site surgery. It can have potentially catastrophic, irreversible, or even fatal consequences for patients. With the exponential development of microsurgery and the use of advanced technological tools, operating on the wrong side, anatomical part, or even person is seen as the most visible and destructive of all surgical errors, and perhaps the error most dreaded by clinicians, as it threatens their licences and arouses feelings of guilt. Despite the implementation of the WHO surgical safety checklist more than a decade ago, the incidence of wrong-site surgeries remains relatively high, leading to tremendous physical and psychological repercussions for the clinicians involved, as well as a financial burden for the healthcare institution. In this presentation, the authors explore the various factors that can lead to wrong site surgery – a combination of environmental and human factors – and evaluate their impact on patients, practitioners, their families, and the medical industry. Major contributing factors to these “never events” include deviations from checklists, excessive workload, and poor communication. Two real-life cases are discussed, and systems that can be implemented to prevent these errors are highlighted alongside lessons learnt from other industries. The authors suggest that reinforcing speaking up, implementing medical professional training, and greater patient involvement can potentially improve safety in surgery and electrosurgery.

Keywords: wrong side surgery, never events, checklist, workload, communication

Procedia PDF Downloads 159
9442 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions

Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal

Abstract:

In this work, we present our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, this model was designed to keep the bottom-up and top-down approaches consistent. It also allows generating emission inventories from a reduced set of input parameters, adapted to the existing conditions in Morocco and in other developing countries. While several simplifications are made, the performance of the model results is retained. A further important advantage of the model is that it allows calculating the uncertainty of the emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented, and tested against a reference solution. It provides an improvement in accuracy over previous formulas of the line source Gaussian plume model, without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, combining the discretized source and analytical line source formulas remarkably reduces the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
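
For context, the baseline that such line source formulas improve upon is the discretization of a road segment into point sources, each treated with the standard Gaussian plume expression and summed at the receptor. The sketch below shows that baseline only; it is not the DISPOLSPEM formula, and the dispersion coefficient functions and emission values are hypothetical placeholders (in practice they depend on atmospheric stability).

    import numpy as np

    def plume(q, u, x, y, z, h):
        """Ground-reflected Gaussian plume for one point source; x is the downwind distance (m)."""
        if x <= 0.0:
            return 0.0                     # receptor upwind of this sub-source: no contribution
        sy = 0.08 * x**0.9                 # placeholder horizontal dispersion coefficient
        sz = 0.06 * x**0.85                # placeholder vertical dispersion coefficient
        return (q / (2.0 * np.pi * u * sy * sz)
                * np.exp(-y**2 / (2.0 * sy**2))
                * (np.exp(-(z - h)**2 / (2.0 * sz**2)) + np.exp(-(z + h)**2 / (2.0 * sz**2))))

    def line_source(q_per_m, u, receptor, start, end, h=0.5, n=50):
        """Discretize the road segment [start, end] into n point sources and sum their plumes."""
        xs, ys = np.linspace(start[0], end[0], n), np.linspace(start[1], end[1], n)
        q = q_per_m * np.hypot(end[0] - start[0], end[1] - start[1]) / n
        return sum(plume(q, u, receptor[0] - px, receptor[1] - py, receptor[2], h)
                   for px, py in zip(xs, ys))

    # receptor 50 m downwind (wind along +x) of a 200 m road emitting 1e-3 g/s per metre
    c = line_source(q_per_m=1e-3, u=3.0, receptor=(50.0, 10.0, 1.5),
                    start=(0.0, -100.0), end=(0.0, 100.0))
    print(c)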

Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport

Procedia PDF Downloads 418
9441 Induced Pulsation Attack Against Kalman Filter Driven Brushless DC Motor Control System

Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap

Abstract:

We use modeling and simulation tools to introduce a novel bias injection attack, named the 'Induced Pulsation Attack', which targets cyber-physical systems with a closed-loop controlled Brushless DC (BLDC) motor and a Kalman filter driver in the feedback loop. This attack involves engaging a linear function with a constant gradient to distort the coefficient of the injected bias, which falsifies the Kalman filter estimates of the rotor's angular speed. As a result, this manipulation inside the control system causes periodic pulsations, in the form of a high-magnitude asymmetric sine wave, in both the current and voltage of the circuit windings. It is shown that by varying the gradient of the linear function, one can control both the frequency and the structure of the induced pulsations. It is also demonstrated that terminating the attack at any point leads to an additional compensating effort from the controller to restore the speed to its equilibrium value. This compensation effort produces an exponentially decaying wave, which we call the 'attack withdrawal syndrome' wave. The conditions for maximizing or minimizing the impact of the attack withdrawal syndrome are determined. Linking the termination of the attack to the end of the full period of the induced pulsation wave has been shown to nullify the attack withdrawal syndrome wave, thereby improving the attack's covertness.
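
To make the bias injection idea concrete, the sketch below runs a scalar Kalman filter on angular speed measurements to which a linear (constant-gradient) bias has been added; the estimate is steadily dragged away from the true speed. This is a deliberately simplified stand-in, not the authors' closed-loop BLDC model, and the noise levels, gradient, and operating speed are hypothetical.

    import numpy as np

    def kalman_speed_estimate(measurements, q=1e-3, r=0.5):
        """Scalar Kalman filter tracking an (assumed constant) angular speed from noisy measurements."""
        x, p = 0.0, 1.0                    # initial estimate and variance
        estimates = []
        for z in measurements:
            p = p + q                      # predict step (random-walk process model)
            k = p / (p + r)                # Kalman gain
            x = x + k * (z - x)            # update with the (possibly attacked) measurement
            p = (1.0 - k) * p
            estimates.append(x)
        return np.array(estimates)

    t = np.arange(0.0, 10.0, 0.01)
    true_speed = 100.0                                         # rad/s, hypothetical operating point
    clean = true_speed + np.random.normal(0.0, 0.5, t.size)    # nominal sensor noise
    gradient = 2.0                                             # attack parameter: slope of the injected ramp
    attacked = clean + gradient * t                            # linear bias with a constant gradient
    print(kalman_speed_estimate(attacked)[-1])                 # estimate drifts well above 100 rad/s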

Keywords: cyber-attack, induced pulsation, bias injection, Kalman filter, BLDC motor, control system, closed loop, P- controller, PID-controller, saw-function, cyber-physical system

Procedia PDF Downloads 48
9440 Voltage and Current Control of Microgrid in Grid Connected and Islanded Modes

Authors: Megha Chavda, Parth Thummar, Rahul Ghetia

Abstract:

This paper presents the voltage and current control of a microgrid, together with the synchronization of the microgrid with the main utility grid in both islanded and grid-connected modes. Distributed Energy Resources (DERs) satisfy the widespread power demand of consumers by behaving as micro sources for a low voltage (LV) grid or microgrid. Synchronization of the microgrid with the main utility grid is done using a PLL, and a PWM gate pulse generation technique is used for the voltage source converter. The potential function method achieves the voltage and current control of this microgrid in both islanded and grid-connected modes. A low voltage grid consisting of three distributed generators (DGs) is considered for the study and is simulated in the time domain using PSCAD/EMTDC software. The simulation results demonstrate the appropriateness of the voltage and current control of the microgrid and of its synchronization with the medium voltage (MV) grid.

Keywords: microgrid, distributed energy resources, voltage and current control, voltage source converter, pulse width modulation, phase locked loop

Procedia PDF Downloads 391
9439 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom

Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu

Abstract:

The normalized glandular dose (DgN) estimates the energy deposition of mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom to calculate the conversion factor. However, breast tissues are not uniformly distributed, leading to errors in conversion factor estimation. This study constructed a three-layer phantom to estimate the normalized glandular dose more accurately. The MCNP code (Monte Carlo N-Particle code) was used to create the geometric structure. We simulated three types of target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six voltages (25 ~ 35 kVp), six HVL parameters, and nine breast phantom thicknesses (2 ~ 10 cm) for the three-layer mammographic phantom. The conversion factors for 25%, 50%, and 75% glandularity were calculated. The error of the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between the 50% average glandularity and the uniform phantom was 7.1% ~ -6.7% for the Mo/Mo combination at a voltage of 27 kVp, a half value layer of 0.34 mmAl, and a breast thickness of 4 cm. According to the simulation results, the regression analysis found that the three-layer mammographic phantom at 0% ~ 100% glandularity can be used to accurately calculate the conversion factors. The difference in glandular tissue distribution leads to errors in conversion factor calculation. The three-layer mammographic phantom can provide accurate estimates of the glandular dose in clinical practice.

Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity

Procedia PDF Downloads 167
9438 Neuropsychological Aspects in Adolescents Victims of Sexual Violence with Post-Traumatic Stress Disorder

Authors: Fernanda Mary R. G. Da Silva, Adriana C. F. Mozzambani, Marcelo F. Mello

Abstract:

Introduction: Sexual assault against children and adolescents is a public health problem with serious consequences for their quality of life, especially for those who develop post-traumatic stress disorder (PTSD). The broad literature in this research area points to greater losses in verbal learning, explicit memory, speed of information processing, attention, and executive functioning in PTSD. Objective: To compare the neuropsychological functions of adolescents from 14 to 17 years of age who are victims of sexual violence with PTSD with those of healthy controls. Methodology: A neuropsychological battery was administered, composed of the following subtests: WASI vocabulary and matrix reasoning; Digit subtests (WISC-IV); the RAVLT verbal auditory learning test; the Spatial Span subtest of the WMS-III scale; an abbreviated version of the Wisconsin test; the D2 concentrated attention test; the prospective memory subtest of the NEUPSILIN scale; the five-digit test (FDT); and the Stroop test (Trenerry version), in adolescents with a history of sexual violence in the previous six months who were referred to Prove (the Violence Care and Research Program of the Federal University of São Paulo) for further treatment. Results: The results showed a deficit in the word encoding process in the RAVLT test, with impairment in the A3 (p = 0.004) and A4 (p = 0.016) measures, which compromises the verbal learning process (p = 0.010) and verbal recognition memory (p = 0.012), suggesting worse performance in the acquisition of verbal information that depends on the support of the attentional system. Worse performance was also found for list B (p = 0.047), together with a lower priming effect (p = 0.026), that is, a lower recall of the initial words presented, and less perseveration (p = 0.002), i.e., repeated words. Therefore, there seems to be a failure in the creation of strategies that support the mnemonic process of retaining the verbal information necessary for learning. Sustained attention was found to be impaired, with a greater loss of set in the Wisconsin test (p = 0.023), a lower rate of correct responses and, consequently, a higher rate of erroneous responses in stage C of the Stroop test (p = 0.023), as well as more type II errors in the D2 test (p = 0.008). A higher incidence of total errors was observed in the reading stage of the FDT test (p = 0.002), which suggests fatigue in the execution of the task. Performance was also compromised in executive functions, specifically cognitive flexibility, with a higher rate of total errors in the alternating step of the FDT test (p = 0.009) and a greater number of perseverative errors in the Wisconsin test (p = 0.004). Conclusion: The data from this study suggest that sexual violence and PTSD cause significant impairment in the neuropsychological functions of adolescents, indicating a risk to quality of life at stages that are fundamental for the development of learning and cognition.

Keywords: adolescents, neuropsychological functions, PTSD, sexual violence

Procedia PDF Downloads 110
9437 Melanoma and Non-Melanoma, Skin Lesion Classification, Using a Deep Learning Model

Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer being the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy of detection and decrease possible human errors. Several studies have shown that the diagnostic performance of computer algorithms can outperform that of dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using the International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show an improvement in performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as the support vector machine (SVM), Residual Network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.
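
The ensemble idea can be illustrated with a soft-voting scheme that averages the softmax outputs of several CNN backbones before taking the final benign/malignant decision. The sketch below is an assumption-laden illustration: the choice of backbones, the two-class heads, and the label convention are hypothetical, and the paper's exact ensemble composition and training are not reproduced here.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    def build_members(num_classes=2):
        """Hypothetical ensemble members with two-class heads (benign vs. malignant)."""
        vgg = models.vgg16(weights=None)
        vgg.classifier[6] = torch.nn.Linear(4096, num_classes)
        eff = models.efficientnet_b0(weights=None)
        eff.classifier[1] = torch.nn.Linear(eff.classifier[1].in_features, num_classes)
        res = models.resnet50(weights=None)
        res.fc = torch.nn.Linear(res.fc.in_features, num_classes)
        return [vgg, eff, res]

    def ensemble_predict(nets, x):
        """Soft voting: average the softmax outputs of all members, then take the argmax."""
        with torch.no_grad():
            probs = torch.stack([F.softmax(net.eval()(x), dim=1) for net in nets])
        return probs.mean(dim=0).argmax(dim=1)     # 0 = benign, 1 = malignant (assumed labels)

    x = torch.rand(4, 3, 224, 224)                 # stand-in for a batch of dermoscopic images
    print(ensemble_predict(build_members(), x))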

Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma

Procedia PDF Downloads 58
9436 Postpartum Depression and Its Association with Food Insecurity and Social Support among Women in Post-Conflict Northern Uganda

Authors: Kimton Opiyo, Elliot M. Berry, Patil Karamchand, Barnabas K. Natamba

Abstract:

Background: Postpartum depression (PPD) is a major psychiatric disorder that affects women soon after birth and, in some cases, is a continuation of antenatal depression. Food insecurity (FI) and social support (SS) are known to be associated with major depressive disorder, and vice versa. This study was conducted to examine the interrelationships among FI, SS, and PPD among postpartum women in Gulu, a post-conflict region in Uganda. Methods: Cross-sectional data from postpartum women on depression symptoms, FI, and SS were obtained using, respectively, the Center for Epidemiologic Studies-Depression (CES-D) scale, the Individually Focused FI Access scale (IFIAS), and the Duke-UNC functional social support scale. Standard regression methods were used to assess associations among FI, SS, and PPD. Results: A total of 239 women were studied, and 40% were found to have PPD, i.e., depressive symptom scores of ≥ 17. The mean ± standard deviation (SD) for the FI and SS scores were 6.47 ± 5.02 and 19.11 ± 4.23, respectively. In adjusted analyses, PPD symptoms were found to be positively associated with FI (unstandardized and standardized beta of 0.703 and 0.432, respectively; standard error = 0.093; p < 0.0001) and negatively associated with SS (unstandardized and standardized beta of -0.263 and -0.135, respectively; standard error = 0.111; p = 0.019). Conclusions: Many women in this post-conflict region reported experiencing PPD. In addition, these data suggest that food security and psychosocial support interventions may help mitigate women's experience of PPD or its severity.

Keywords: postpartum depression, food insecurity, social support, post-conflict region

Procedia PDF Downloads 143
9435 Pressure-Robust Approximation for the Rotational Fluid Flow Problems

Authors: Medine Demir, Volker John

Abstract:

Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially the large-scale flows of the ocean and atmosphere, as well as many physical and industrial applications. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications, it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier--Stokes equations in a rotating frame have been investigated in a number of papers using classical inf-sup stable mixed methods, like Taylor--Hood pairs, to contribute to the analysis and to accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier--Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott--Vogelius pairs. This approach might come with a modification of the meshes, like the use of barycentric-refined grids in the case of Scott--Vogelius pairs. This strategy, however, requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also conflict with the solver for the linear system. An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be applied to different types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott--Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices.
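
For reference, the system in question is the incompressible Navier--Stokes problem augmented by the Coriolis term; in the standard formulation (written here in the usual notation, which may differ from the paper's), the centripetal contribution is absorbed into a modified pressure:

    \[
      \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
      + 2\,\boldsymbol{\omega}\times\mathbf{u}
      - \nu\,\Delta\mathbf{u} + \nabla p = \mathbf{f},
      \qquad
      \nabla\cdot\mathbf{u} = 0,
    \]

where $\mathbf{u}$ is the velocity, $p$ the (modified) kinematic pressure, $\nu$ the viscosity, $\boldsymbol{\omega}$ the angular velocity of the rotating frame, and $\mathbf{f}$ the body force. A pressure-robust discretization aims at velocity error bounds that do not degrade for complicated pressures or small $\nu$.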

Keywords: Navier-Stokes equations in a rotating frame of reference, Coriolis force, pressure-robust error estimate, Scott-Vogelius pairs of finite element spaces

Procedia PDF Downloads 37