Search results for: measurement errors
2978 Use of In-line Data Analytics and Empirical Model for Early Fault Detection
Authors: Hyun-Woo Cho
Abstract:
Automatic process monitoring schemes are designed to give early warnings of unusual process events or abnormalities as soon as possible. To this end, various techniques have been developed and utilized in industrial processes, including multivariate statistical methods, representation techniques in reduced spaces, and kernel-based nonlinear techniques. This work presents a nonlinear empirical monitoring scheme for batch-type production processes with incomplete process measurement data. While normal operation data are easy to obtain, fault data occur infrequently and are thus difficult to collect. In this work, noise filtering steps are added in order to enhance monitoring performance by eliminating irrelevant information from the data. The performance of the monitoring scheme was demonstrated using batch process data. The results showed that the monitoring performance improved significantly in terms of the detection success rate for process faults.
Keywords: batch process, monitoring, measurement, kernel method
Procedia PDF Downloads 323
2977 Real-Time Inventory Management and Operational Efficiency in Manufacturing
Authors: Tom Wanyama
Abstract:
We have developed a weight-based parts inventory monitoring system utilizing the Industrial Internet of Things (IIoT) to enhance operational efficiencies in manufacturing. The system addresses various challenges, including eliminating downtimes caused by stock-outs, preventing human errors in parts delivery and product assembly, and minimizing motion waste by reducing unnecessary worker movements. The system incorporates custom QR codes for simplified inventory tracking and retrieval processes. The generated data serves a dual purpose by enabling real-time optimization of parts flow within manufacturing facilities and facilitating retroactive optimization of stock levels for informed decision-making in inventory management. The pilot implementation at SEPT Learning Factory successfully eradicated data entry errors, optimized parts delivery, and minimized workstation downtimes, resulting in a remarkable increase of over 10% in overall equipment efficiency across all workstations. Leveraging the IIoT features, the system seamlessly integrates information into the process control system, contributing to the enhancement of product quality. This approach underscores the importance of effective tracking of parts inventory in manufacturing to achieve transparency, improved inventory control, and overall profitability. In the broader context, our inventory monitoring system aligns with the evolving focus on optimizing supply chains and maintaining well-managed warehouses to ensure maximum efficiency in the manufacturing industry.
Keywords: Industrial Internet of Things, industrial systems integration, inventory monitoring, inventory control in manufacturing
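The core of such a weight-based counting scheme is simple to sketch. The helper below converts a load-cell reading into a part count and flags low stock; the function names, unit weight, tare, and reorder point are illustrative assumptions, not details from the SEPT Learning Factory pilot.

```python
def estimate_count(total_weight_g, unit_weight_g, tare_g=0.0):
    """Estimate the number of parts in a bin from a load-cell reading."""
    net = total_weight_g - tare_g
    if net < 0:
        raise ValueError("net weight below zero; check tare calibration")
    return round(net / unit_weight_g)

def low_stock_alert(count, reorder_point):
    """Flag a bin for replenishment before a stock-out occurs."""
    return count <= reorder_point

# A bin of bolts (assumed 5.2 g each) sitting on a 150 g tray:
count = estimate_count(total_weight_g=1190.0, unit_weight_g=5.2, tare_g=150.0)
print(count)                                     # 200
print(low_stock_alert(count, reorder_point=250)) # True -> trigger delivery
```

In a deployed system the reorder point would be tuned retroactively from the stored consumption data, which is the dual real-time/retrospective use the abstract describes.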
Procedia PDF Downloads 33
2976 Finite Element Analysis of Human Tarsals, Metatarsals and Phalanges for Predicting Probable Location of Fractures
Authors: Irfan Anjum Manarvi, Fawzi Aljassir
Abstract:
Human bones have long been a keen area of research in the field of biomechanical engineering. Medical professionals, as well as engineering academics and researchers, have investigated various bones using medical, mechanical, and materials approaches to build the available body of knowledge. Their major focus has been to establish the properties of these bones and ultimately to develop processes and tools either to prevent fracture or to repair its damage. The literature shows that mechanical researchers conducted a variety of tests for hardness, deformation, and strain field measurement to arrive at their findings. However, they considered the accuracy of these results insufficient due to various limitations of the tools and test equipment and difficulties in the availability of human bones. They proposed further studies to first overcome inaccuracies in measurement methods, testing machines, and experimental errors, and then carry out experimental or theoretical studies. Finite element analysis is a technique that was developed for the aerospace industry due to the complexity of designs and materials, but over time it has found applications in many other industries owing to its accuracy and its flexibility in the selection of materials and the types of loading that can be theoretically applied to an object under study. In the past few decades, the field of biomechanical engineering has also begun to adopt it. However, the work done on the tarsals, metatarsals, and phalanges using this technique is very limited. Therefore, the present research focuses on using this technique for the analysis of these critical bones of the human body. The technique requires a three-dimensional geometric computer model of the object to be analyzed. In the present research, a 3D laser scanner was used to obtain accurate geometric scans of individual tarsals, metatarsals, and phalanges from a typical human foot to build these computer geometric models.
These were then imported into finite element analysis software, and a mesh refinement process was carried out prior to analysis to ensure the computer models were true representatives of the actual bones. This was followed by analysis of each bone individually. A number of constraints and load conditions were applied to observe the stress and strain distributions in these bones under compressive loads, tensile loads, or their combination. Results were collected for deformations along the various axes, and the stress and strain distributions were examined to identify critical locations where fracture could occur. A comparative analysis of the failure properties of all three types of bones was carried out to establish which of them would fail first, and it is presented in this research. The results of this investigation could be used in further experimental studies by academics and researchers, as well as industrial engineers, to develop foot protection devices or tools for surgical operations and the recovery treatment of these bones. Researchers could also build on these models to analyze a complete human foot through finite element analysis under various loading conditions such as walking, marching, running, and landing after a jump.
Keywords: tarsals, metatarsals, phalanges, 3D scanning, finite element analysis
Procedia PDF Downloads 329
2975 An Application of Extreme Value Theory as a Risk Measurement Approach in Frontier Markets
Authors: Dany Ng Cheong Vee, Preethee Nunkoo Gonpot, Noor Sookia
Abstract:
In this paper, we consider the application of extreme value theory (EVT) as a risk measurement tool. The Value at Risk (VaR) for a set of indices from six stock exchanges in frontier markets is calculated using the Peaks over Threshold method, and the performance of the model for each index is evaluated using coverage tests and loss functions. Our results show that 'fat-tailedness' of the data alone is not enough to justify the use of EVT as a VaR approach; the structure of the returns dynamics is also a determining factor. The approach works well in markets that have experienced extremes in the past, which makes the model capable of coping with extremes to come (the Colombo, Tunisia, and Zagreb stock exchanges). On the other hand, we find that indices with lower past than present volatility fail to deal adequately with future extremes (Mauritius and Kazakhstan). We also conclude that using EVT alone produces rather static VaR figures that do not reflect the actual dynamics of the data.
Keywords: extreme value theory, financial crisis 2008, value at risk, frontier markets
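Once a generalized Pareto distribution (GPD) has been fitted to the exceedances over a threshold, the Peaks over Threshold VaR used in this kind of study has a compact closed form. A minimal sketch, with illustrative parameter values that are assumptions for the demo, not figures from the paper:

```python
import math

def pot_var(threshold, xi, beta, n_total, n_exceed, q):
    """VaR at confidence level q from a GPD tail fitted by Peaks over Threshold.
    threshold: POT threshold u; xi: GPD shape; beta: GPD scale;
    n_total: sample size; n_exceed: number of exceedances over u."""
    tail_prob = (n_total / n_exceed) * (1.0 - q)
    if xi != 0.0:
        return threshold + (beta / xi) * (tail_prob ** (-xi) - 1.0)
    return threshold - beta * math.log(tail_prob)  # xi -> 0 reduces to an exponential tail

# Illustrative numbers: 50 of 1000 daily losses exceeded u = 2%, with a
# fitted fat-tailed shape of 0.2 and scale 0.5:
var_99 = pot_var(threshold=2.0, xi=0.2, beta=0.5, n_total=1000, n_exceed=50, q=0.99)
print(round(var_99, 2))  # 2.95
```

A positive shape parameter (the 'fat-tailed' case the abstract refers to) makes the VaR grow faster than exponentially as the confidence level approaches one.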
Procedia PDF Downloads 276
2974 An Adaptive Controller Method Based on Full-State Linear Model of Variable Cycle Engine
Authors: Jia Li, Huacong Li, Xiaobao Han
Abstract:
Because a VCE (variable cycle aircraft engine) has more variable geometry parameters than a conventional engine, this paper presents an adaptive controller method based on a full-state linear model of the VCE and validates it in simulation to solve the multivariable controller design problem across the whole flight envelope. First, the static and dynamic effects of the variable geometry components on the bypass ratio and other state parameters are analyzed, and a nonlinear component-level model of the VCE is developed. Based on this component model, multiple linear models that take the variable geometry parameters as inputs are set up through small-deviation linearization of the main fuel flow (Wf), the tail nozzle throat area (A8), and the rear bypass ejector angle (A163). Second, adaptive controllers are designed for the VCE linear models at different nominal points. Accounting for modeling uncertainties and external disturbances, the adaptive law is derived using a Lyapunov function. The simulation results showed that the adaptive controller method based on the full-state linear model, using the rear bypass ejector angle as an input, effectively solved the multivariable control problems of the VCE. At all nominal points, the performance tracked the desired closed-loop reference commands: the settling time was less than 1.2 s, the overshoot was less than 1%, the steady-state errors were less than 0.5%, and the dynamic tracking errors were less than 1%. In addition, the designed controller effectively suppressed interference and reached the desired commands under different external random noise signals.
Keywords: variable cycle engine (VCE), full-state linear model, adaptive control, bypass ratio
Procedia PDF Downloads 317
2973 Comparison of Some Robust Regression Methods with the OLS Method, with an Application
Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq
Abstract:
The classical least squares (OLS) method estimates the linear regression parameters with good properties, such as unbiasedness, minimum variance, and consistency, when its assumptions hold. When the data are contaminated with outliers, alternative robust (or resistant) statistical techniques for estimating the parameters are needed. In this paper, three robust methods are studied: the maximum-likelihood-type estimator (M-estimator), the modified maximum-likelihood-type estimator (MM-estimator), and the least trimmed squares estimator (LTS-estimator), and their results are compared with the OLS method. These methods were applied to real data taken from the Duhok company for manufacturing furniture, and the results were compared using three criteria: mean squared error (MSE), mean absolute percentage error (MAPE), and mean sum of absolute errors (MSAE). The important conclusions of this study are as follows. In the furniture line, the outliers detected by the four methods are few and lie very close to the bulk of the data, indicating that the distribution of the errors is close to normal. In the doors line, however, OLS detects fewer outliers than the robust methods, indicating that the error distribution departs far from normality; moreover, for the doors line the parameter estimates from OLS are very far from those obtained by the robust methods. The LTS-estimator gave the best results under the MSE criterion, the M-estimator under MAPE, and the MM-estimator under MSAE. The programs S-Plus (version 8.0, Professional 2007), Minitab (version 13.2), and SPSS (version 17) were used to analyze the data.
Keywords: robust regression, LTS, M-estimator, MSE
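The M-estimation compared against OLS above is typically computed by iteratively reweighted least squares (IRLS) with Huber weights. The pure-Python sketch below for a simple line fit is illustrative only (the paper's data and software packages differ); it shows how a single gross outlier pulls the OLS slope far off while the M-estimator stays near the true line.

```python
def ols_fit(x, y):
    """Closed-form OLS for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def huber_fit(x, y, c=1.345, iters=50):
    """M-estimator via iteratively reweighted least squares with Huber weights."""
    a, b = ols_fit(x, y)  # start from the OLS solution
    for _ in range(iters):
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # robust scale from the median absolute residual (MAD), with a floor
        s = sorted(abs(ri) for ri in r)[len(r) // 2] / 0.6745 or 1.0
        w = [1.0 if abs(ri) <= c * s else c * s / abs(ri) for ri in r]
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        num = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        den = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        b = num / den
        a = my - b * mx
    return a, b

# Clean line y = 2x with one gross outlier in the last observation:
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 4, 6, 8, 10, 12, 14, 60]
print(ols_fit(x, y))    # slope pulled far above 2 by the outlier
print(huber_fit(x, y))  # slope stays close to 2
```

The LTS-estimator mentioned in the abstract goes further: instead of downweighting large residuals, it minimizes the sum of only the smallest trimmed squared residuals.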
Procedia PDF Downloads 232
2972 An Experimental Study on the Measurement of Fuel to Air Ratio Using Flame Chemiluminescence
Authors: Sewon Kim, Chang Yeop Lee, Minjun Kwon
Abstract:
This study aims to establish the relationship between the optical signal of a flame and its equivalence ratio. In this experiment, the flame optical signal in a furnace is measured using a photodiode. The combustion system is composed of a metal fiber burner and a vertical furnace, and the flame chemiluminescence is measured at various experimental conditions. The chemiluminescence of a laminar premixed flame is measured using a commercially available photodiode, and the relationship between the equivalence ratio and the photodiode signal is experimentally investigated. In addition, a combustion control strategy based on the optical signal and the fuel pressure is proposed. The results showed that a definite relationship exists between the photodiode optical data and the equivalence ratio, which enables the successful application of this system for instantaneous measurement of the equivalence ratio of the combustion system.
Keywords: flame chemiluminescence, photodiode, equivalence ratio, combustion control
Procedia PDF Downloads 397
2971 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world have long faced a significant challenge in predicting rail infrastructure maintenance work. Generally, maintenance monitoring and prediction are conducted manually. With restrictions in the economy, rail transport authorities are in pursuit of improved modern methods that can provide precise predictions of rail maintenance time and location. The expectation of such methods is to develop models that minimize the human error strongly associated with manual prediction. Such models will help authorities understand how track degradation occurs over time under changing conditions (e.g., rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize maintenance cost and time and keep vehicles safe. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, they may contain errors, and sometimes these errors make the data unusable for prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks; accurate models increase track safety and decrease maintenance costs in the long term. In this research, a short review of rail track degradation prediction models is given before rail track degradation is estimated for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
Keywords: ANFIS, MGT, prediction modeling, rail track degradation
Procedia PDF Downloads 335
2970 Fracture Crack Monitoring Using Digital Image Correlation Technique
Authors: B. G. Patel, A. K. Desai, S. G. Shah
Abstract:
The main objective of this paper is to develop a new measurement technique that does not touch the object. Digital image correlation (DIC) is an advanced measurement technique used to measure particle displacement with very high accuracy; this powerful, innovative technique correlates two image segments to determine the similarity between them. For this study, nine geometrically similar beam specimens of different sizes, with steel fibers or glass fibers and without fibers, were tested under three-point bending in a closed-loop servo-controlled machine with crack mouth opening displacement control at an opening rate of 0.0005 mm/sec. Digital images were captured before loading (undeformed state) and at different instances of loading, and were analyzed using correlation techniques to compute the surface displacements, crack opening and sliding displacements, load-point displacement, crack length, and crack tip location. It was seen that the CMOD and the vertical load-point displacement computed using DIC analysis match well with those measured experimentally.
Keywords: digital image correlation, fibres, self compacting concrete, size effect
Procedia PDF Downloads 389
2969 Assessing Perinatal Mental Illness during the COVID-19 Pandemic: A Review of Measurement Tools
Authors: Mya Achike
Abstract:
Background and Significance: Perinatal mental illness covers a wide range of conditions and has a huge influence on maternal-child health. Issues and challenges with perinatal mental health have been associated with poor pregnancy, birth, and postpartum outcomes. It is estimated that one out of five new and expectant mothers experience some degree of perinatal mental illness, which makes this a hugely significant health outcome. Certain factors increase the maternal risk for mental illness. Challenges related to poverty, migration, extreme stress, exposure to violence, emergency and conflict situations, natural disasters, and pandemics can exacerbate mental health disorders. It is widely expected that perinatal mental health is being negatively affected during the present COVID-19 pandemic. Methods: A review of studies that reported a measurement tool to assess perinatal mental health outcomes during the COVID-19 pandemic was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. PubMed, CINAHL, and Google Scholar were used to search for peer-reviewed studies published after late 2019, in accordance with the emergence of the virus. The search resulted in the inclusion of ten studies. Approach to measure health outcome: The main approach to measuring perinatal mental illness is the use of self-administered, validated questionnaires, usually in the clinical setting. Summary: Widespread use of these tools has afforded the clinical and research communities the ability to identify and support women who may be suffering from mental illness disorders during a pandemic. More research is needed to validate tools in other vulnerable perinatal populations.
Keywords: mental health during covid, perinatal mental health, perinatal mental health measurement tools, perinatal mental health tools
Procedia PDF Downloads 135
2968 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. In particular, a linear single-input single-output system can be modeled with a shallow neural network, with gradient-based optimization algorithms used to search for the proper coefficients. In addition, methods using the normal equation and second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient descent with momentum contribute to faster convergence and enhance model capability. Lastly, experimental results proved the effectiveness of the second-order gradient descent algorithm and indicated that optimization with the normal equation was the most suitable for linear dynamic models.
Keywords: dynamic system modeling, neural network, normal equation, second order gradient descent
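For a linear dynamic model, fitting via the normal equation, the approach the abstract finds most suitable, amounts to solving (XᵀX)w = Xᵀy once. A pure-Python sketch under illustrative assumptions (a two-tap FIR model with made-up input data; the paper's systems and data differ):

```python
def normal_equation_fit(X, y):
    """Solve the normal equations (X^T X) w = X^T y by Gaussian elimination."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    # forward elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    w = [0.0] * p
    for i in reversed(range(p)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, p))) / A[i][i]
    return w

# Identify a two-tap FIR model y[k] = 0.7*u[k] + 0.3*u[k-1] from input/output data:
u = [1.0, 2.0, -1.0, 0.5, 3.0, -2.0, 1.5]
y = [0.7 * u[k] + 0.3 * u[k - 1] for k in range(1, len(u))]
X = [[u[k], u[k - 1]] for k in range(1, len(u))]
w = normal_equation_fit(X, y)
print([round(wi, 6) for wi in w])  # [0.7, 0.3]
```

Unlike iterative gradient descent, this recovers the coefficients in one step, which matches the abstract's observation that the normal equation suits linear dynamic models best.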
Procedia PDF Downloads 127
2967 Spaces of Interpretation: Personal Space
Authors: Yehuda Roth
Abstract:
In quantum theory, a system’s time evolution is predictable unless an observer performs a measurement, as the measurement process can randomize the system. This randomness appears when the measuring device does not accurately describe the measured item, i.e., when the states characterizing the measuring device appear as a superposition of those being measured. When such a mismatch occurs, the measured data randomly collapse into a single eigenstate of the measuring device. This scenario resembles the interpretation process, in which the observer does not experience an objective reality but interprets it based on preliminary descriptions initially ingrained in his or her mind. This distinction is the motivation for the present study, in which the collapse scenario is regarded as part of the observer's interpretation process. By adopting the formalism of quantum theory, we present a complete mathematical approach that describes the interpretation process. We demonstrate this process by applying the proposed interpretation formalism to the ambiguous image "My Wife and My Mother-in-Law" to identify whether the woman in the picture is young or old.
Keywords: quantum-like interpretation, ambiguous image, determination, quantum-like collapse, classified representation
Procedia PDF Downloads 104
2966 Defence Ethics: A Performance Measurement Framework for the Defence Ethics Program
Authors: Allyson Dale, Max Hlywa
Abstract:
The Canadian public expects the highest moral standards from Canadian Armed Forces (CAF) members and Department of National Defence (DND) employees. The Chief, Professional Conduct and Culture (CPCC) stood up in April 2021 with the mission of ensuring that the defence culture and members’ conduct are aligned with the ethical principles and values that the organization aspires towards. The Defence Ethics Program (DEP), which stood up in 1997, is a values-based ethics program for individuals and organizations within the DND/CAF and now falls under CPCC. The DEP is divided into five key functional areas: policy, communications, collaboration, training and education, and advice and guidance. The main focus of the DEP is to foster an ethical culture within defence so that members and organizations perform to the highest ethical standards. The measurement of organizational ethics is often complex and challenging. In order to monitor whether the DEP is achieving its intended outcomes, a performance measurement framework (PMF) was developed using the Director General Military Personnel Research and Analysis (DGMPRA) PMF development process, an evidence-based process grounded in subject-matter expertise from the defence team. The goal of this presentation is to describe each stage of the DGMPRA PMF development process and to present and discuss the products of the DEP PMF (e.g., the logic model). Specifically, first, a strategic framework was developed to provide a high-level overview of the strategic objectives, mission, and vision of the DEP. Next, Key Performance Questions were created based on the objectives in the strategic framework. A logic model detailing the activities, outputs (what is produced by the program activities), and intended outcomes of the program was developed to demonstrate how the program works.
Finally, Key Performance Indicators were developed based on both the intended outcomes in the logic model and the Key Performance Questions in order to monitor program effectiveness. The Key Performance Indicators measure aspects of organizational ethics such as ethical conduct and decision-making, DEP collaborations, and knowledge and awareness of the Defence Ethics Code, while leveraging ethics-related items from multiple DGMPRA surveys where appropriate.
Keywords: defence ethics, ethical culture, organizational performance, performance measurement framework
Procedia PDF Downloads 103
2965 Mending Broken Fences Policing: Developing the Intelligence-Led/Community-Based Policing Model (IP-CP) and Quality/Quantity/Crime (QQC) Model
Authors: Anil Anand
Abstract:
Despite the enormous strides made during the past decade, particularly with the adoption and expansion of community policing, there remains much that police leaders can do to improve police-public relations. The urgency is particularly evident in cities across the United States and Europe, where an increasing number of police interactions over the past few years have ignited large, sometimes even national, protests against police policy and strategy, highlighting a gap between what police leaders feel they have achieved in terms of public satisfaction, support, and legitimacy and the perception of bias among many marginalized communities. The decision on which policing strategy is chosen over another, how many resources are allocated, and how strenuously the policy is applied resides primarily with the police and the units and subunits tasked with its enforcement. The scope and opportunity that police officers have to shape social attitudes and social policy are important and cannot be overstated. How do police leaders, for instance, decide when to apply one strategy, say community-based policing, over another, like intelligence-led policing? How do police leaders measure performance and success? Should these measures favor the quantitative over the qualitative, or should they be based on some other criteria? And how do police leaders define, allow, and control discretionary decision-making? Mending Broken Fences Policing provides police and security services leaders with a model based on social cohesion that incorporates intelligence-led and community policing (IP-CP), supplemented by a quality/quantity/crime (QQC) framework, to provide a four-step process for the articulable application of police intervention, performance measurement, and application of discretion.
Keywords: social cohesion, quantitative performance measurement, qualitative performance measurement, sustainable leadership
Procedia PDF Downloads 295
2964 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying
Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra
Abstract:
Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing on-line quality control during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. The calibration and validation sets were designed for the construction and evaluation of the method's adequacy in the moisture content range of 10 to 15 percent (wet basis). Prediction models based on partial least squares (PLS) regression were developed in the near-infrared. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered in the selection among three pre-processing methods (vector normalization, minimum-maximum normalization, and multiplicative scatter correction). The spectra of the pasta samples were treated with different mathematical pre-treatments before being used to build models between the spectral information and the moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined via traditional methods (R2 = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for rapid determination of the moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775). The MMN pre-processing method was thus found most suitable, and a maximum coefficient of determination (R2) of 0.9875 was obtained for the calibration model developed.
Keywords: FT-NIR, pasta, moisture determination, food engineering
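The minimum-maximum normalization (MMN) pre-processing found most suitable here simply rescales each spectrum to a common [0, 1] range before PLS modeling, which removes baseline offset and overall intensity differences between samples. A minimal sketch with illustrative absorbance values (not data from the paper):

```python
def min_max_normalize(spectrum):
    """Scale one spectrum so its absorbance values span exactly [0, 1]."""
    lo, hi = min(spectrum), max(spectrum)
    if hi == lo:
        raise ValueError("flat spectrum cannot be min-max normalized")
    return [(v - lo) / (hi - lo) for v in spectrum]

# Illustrative raw absorbances at a few wavenumbers:
raw = [0.42, 0.55, 0.61, 0.48, 0.91, 0.40]
norm = min_max_normalize(raw)
print(min(norm), max(norm))  # 0.0 1.0
```

Each spectrum is normalized independently, so two pasta samples measured at different overall intensities end up on the same scale before the PLS regression step.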
Procedia PDF Downloads 258
2963 Application Research of Stilbene Crystal for the Measurement of Accelerator Neutron Sources
Authors: Zhao Kuo, Chen Liang, Zhang Zhongbing, Ruan Jinlu, He Shiyi, Xu Mengxuan
Abstract:
Stilbene (C₁₄H₁₂) is well known as one of the most useful organic scintillators for the pulse shape discrimination (PSD) technique because of its good scintillation properties. An on-line acquisition system and an off-line acquisition system were developed, the former with several CAMAC standard plug-ins, NIM plug-ins, and a neutron/γ discrimination plug-in (model 2160A), and the latter with a digital oscilloscope with a high sampling rate; stilbene crystals coupled to photomultiplier tube (PMT) detectors served as the detectors for the accelerator neutron source measurements carried out at the China Institute of Atomic Energy. Pulse amplitude spectra and charge amplitude spectra were recorded in real time after good neutron/γ discrimination, whose best PSD figures of merit (FoMs) were 1.756 for the D-D accelerator neutron source and 1.393 for the D-T accelerator neutron source. As observed by the on-line acquisition system after subtracting the scattering background, the proportion of neutron events among total events was 80% and the neutron detection efficiency was 5.21% for the D-D source, versus 50% and 1.44% for the D-T source. Pulse waveform signals were acquired at random by the off-line acquisition system while the on-line system was working. The PSD FoMs obtained by the off-line system after waveform digitization and off-line processing with the charge integration method, for just 1,000 pulses, were 2.158 for the D-D source and 1.802 for the D-T source. In addition, the proportions of neutron events obtained by the off-line system matched the on-line proportions very well. The pulse information recorded by the off-line system can be reused to adjust the parameters or methods of the PSD analysis and to obtain neutron charge amplitude spectra or pulse amplitude spectra from digital analysis of a limited number of pulses.
With a limited number of pulses, the off-line acquisition system showed measurement performance equivalent to or better than that of the on-line system, which indicates a feasible method based on stilbene crystal detectors for the measurement of prompt neutron sources, such as accelerator neutron sources that emit a large number of neutrons in a short time.
Keywords: stilbene crystal, accelerator neutron source, neutron/γ discrimination, figure of merit, CAMAC, waveform digitization
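The charge integration method used for the off-line PSD can be sketched as a tail-to-total charge ratio per pulse, with the figure of merit computed from the separation and widths of the resulting neutron and γ ratio peaks. The pulse shapes and peak statistics below are illustrative assumptions, not the paper's data:

```python
def psd_ratio(pulse, tail_start):
    """Charge integration PSD: tail charge divided by total charge.
    pulse: digitized samples; tail_start: index where the tail gate opens."""
    total = sum(pulse)
    tail = sum(pulse[tail_start:])
    return tail / total

def figure_of_merit(mu_n, fwhm_n, mu_g, fwhm_g):
    """FoM = peak separation divided by the sum of the peak FWHMs."""
    return abs(mu_n - mu_g) / (fwhm_n + fwhm_g)

# Synthetic pulses: neutron pulses carry more delayed (tail) light than gammas.
gamma = [0.0, 10.0, 5.0, 2.0, 0.8, 0.3, 0.1]
neutron = [0.0, 10.0, 6.0, 3.5, 2.0, 1.2, 0.8]
print(psd_ratio(gamma, 3) < psd_ratio(neutron, 3))  # True
print(figure_of_merit(0.30, 0.04, 0.12, 0.05))      # ≈ 2.0
```

A FoM above roughly 1.5 is usually taken to mean the two peaks are well separated, consistent with the clean discrimination the abstract reports.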
Procedia PDF Downloads 186
2962 Analysis of the Acoustic Performance of Vertical Internal Seals with PET Wool per NBR 15.575-4 in the Green Towers Building, Brasilia-DF
Authors: Lucas Aerre, Wallesson Faria, Roberto Pimentel, Juliana Santos
Abstract:
An extremely disturbing and irritating element in the lives of people and organizations is noise; its consequences are closely connected with human health as well as with financial and economic aspects. In order to improve the efficiency of buildings in Brazil in general, a performance standard, NBR 15.575, was created, in which all buildings are treated in a more systemic and individual way, provided they follow the requirements of the standard. The acoustic performance present in these buildings is one such requirement. On this basis, the present work was developed with the objective of evaluating, through acoustic measurements, the acoustic performance of vertical internal seals subject to the incidence of airborne noise in a building in the city of Brasilia-DF. A short theoretical basis is given, and then the measurement procedures are described following the control method established by the standard, with the results evaluated according to its parameters. The measurement performed between rooms of the same unit presented a standardized sound pressure level difference (DnT,w) equal to 40 dB, thus being classified within the minimum performance level required by the standard in question.
Keywords: airborne noise, performance standard, soundproofing, vertical seal
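The standardized level difference reported in such measurements corrects the raw level difference for the receiving room's reverberation time, DnT = D + 10·log10(T/T0) with T0 = 0.5 s (the reference value used in field measurements of airborne sound insulation). A minimal sketch; the 37 dB raw difference and 1.0 s reverberation time are illustrative assumptions, while the 40 dB comparison value matches the measured DnT,w in the abstract:

```python
import math

def standardized_level_difference(d_db, reverb_time_s, t0_s=0.5):
    """DnT = D + 10*log10(T/T0), with reference reverberation time T0 = 0.5 s."""
    return d_db + 10.0 * math.log10(reverb_time_s / t0_s)

# A raw level difference of 37 dB measured with a 1.0 s reverberation time:
dnt = standardized_level_difference(37.0, 1.0)
print(round(dnt, 1))  # 40.0
print(dnt >= 40)      # True: meets the minimum performance level
```

The correction rewards live (reverberant) receiving rooms with a higher standardized value, which is why the raw and standardized differences generally differ.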
Procedia PDF Downloads 297
2961 Temperament and Psychopathology in Children of Patients Suffering from Schizophrenia
Authors: Rushi Naaz, Diksha Suchdeva
Abstract:
Background: Temperament is a very important aspect of functioning that needs to be understood in children of patients suffering from schizophrenia. Children of parents with a mental disorder have a substantially increased risk of psychiatric illness and may exhibit a range of problems, from minor variations in temperament and adjustment to manifest psychiatric disorder. Method: A case-control study was conducted to examine the temperament characteristics and psychopathology of children of patients suffering from schizophrenia compared with those of healthy controls. Both groups were evaluated on the Temperament Measurement Schedule and the Childhood Psychopathology Measurement Schedule. Results: The results showed that children of patients suffering from schizophrenia were more withdrawing, less adaptable, less sociable, and had a lower activity level than children of healthy parents. However, on the measure of psychopathology, no significant difference was found. Conclusion: Since temperament can be identified at an early age, children at risk for the disorder could be identified early enough for possible primary intervention. Keywords: children, childhood psychopathology, parental psychopathology, psychiatric disorders, schizophrenia, temperament
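A group comparison of the kind reported here (temperament scores of cases versus controls) is typically assessed with a two-sample test. Below is a minimal sketch using Welch's t statistic on purely hypothetical adaptability scores; the study's actual data and test choice are not reproduced.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)  # sample variances (n-1 denominator)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical adaptability scores (illustrative, not the study's data)
patients_children = [12, 14, 13, 15, 11, 14, 13, 12, 15, 13]
controls_children = [16, 18, 17, 15, 17, 19, 16, 18, 17, 16]

t = welch_t(patients_children, controls_children)  # strongly negative: cases score lower
```

A large-magnitude t (compared against the t distribution with Welch-Satterthwaite degrees of freedom) would correspond to the significant group differences reported for the temperament dimensions.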
Procedia PDF Downloads 372
2960 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment, or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer potential to extend tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Following this, a database suitable for developing machine learning models is created. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts that behave more sensitively than the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently. Keywords: automotive production, machine learning, process optimization, smart tolerancing
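The model-comparison step described above (fitting several model types to historical measurement data and keeping the best one by holdout error) can be sketched as follows. The synthetic data, the position-vs-deviation relationship, and the train/test split are all illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for historical measurement data: deviation of a
# measurement point as a function of its position along the part (illustrative).
x = rng.uniform(0.0, 1.0, 200)
y = 0.3 * x**2 - 0.1 * x + rng.normal(0.0, 0.01, 200)

train, test = slice(0, 150), slice(150, 200)

def holdout_rmse(deg):
    """Fit a polynomial surrogate on the training split, score on holdout."""
    coeffs = np.polyfit(x[train], y[train], deg)
    pred = np.polyval(coeffs, x[test])
    return float(np.sqrt(np.mean((pred - y[test]) ** 2)))

rmse_linear = holdout_rmse(1)
rmse_quadratic = holdout_rmse(2)
best = "quadratic" if rmse_quadratic < rmse_linear else "linear"
```

The winning surrogate would then be the one used to place measurement points and assign local tolerance ranges on new part geometries.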
Procedia PDF Downloads 115
2959 On Board Measurement of Real Exhaust Emission of Light-Duty Vehicles in Algeria
Authors: R. Kerbachi, S. Chikhi, M. Boughedaoui
Abstract:
The study presents an analysis of the Algerian vehicle fleet and the resulting emissions. The measurement of air pollutants emitted by road transportation (CO, THC, NOx and CO2) was conducted on 17 light-duty vehicles in real traffic. This sample is representative of Algerian light vehicles in terms of fuel type (gasoline, diesel, and liquefied petroleum gas) and technology level (injection system and emission control). The experimental methodology for measuring unit emissions of vehicles in real traffic situations is based on the use of a mini constant volume sampler (mini-CVS) for gas sampling and a set of gas analyzers for CO2, CO, NOx and THC, with instrumentation to measure kinematics, gas temperature, and pressure. The apparatus is also equipped with data logging and data transfer instruments. The results were compared with the database of European light vehicles (Artemis). It was shown that liquefied petroleum gas (LPG) injection technology has a significant impact on air pollutant emissions. Therefore, with the exception of nitrogen oxide compounds, uncatalyzed LPG vehicles are more effective in reducing unit emissions of air pollutants compared to uncatalyzed gasoline vehicles. LPG performance seems to be lower under real driving conditions than expected on a chassis dynamometer. On the other hand, the results show that uncatalyzed gasoline vehicles emit high levels of carbon monoxide and nitrogen oxides. Overall, and in the absence of standards in Algeria, unit emissions are much higher than Euro 3 limits. The enforcement of pollutant emission standards in developing countries is an important step towards introducing cleaner technology and reducing vehicular emissions. Keywords: on-board measurements of unit emissions of CO, HC, NOx and CO2, light vehicles, mini-CVS, LPG fuel, Artemis, Algeria
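The unit-emission calculation behind CVS-based measurement can be sketched as follows: the measured diluted concentration is converted to a mass via the pollutant's density (molar mass over molar volume) and the diluted exhaust volume, then normalized by distance driven. All numerical inputs below are illustrative assumptions, not the study's measurements.

```python
# Constants for carbon monoxide (CO)
M_CO = 28.01        # g/mol, molar mass of CO
V_MOLAR = 22.414    # L/mol, molar volume of an ideal gas at 0 degC, 101.325 kPa

def unit_emission_g_per_km(conc_ppm, v_mix_l, distance_km, molar_mass):
    """Mass of pollutant emitted per km from its diluted concentration.

    conc_ppm:    background-corrected concentration in the diluted exhaust (ppm)
    v_mix_l:     total diluted exhaust volume sampled by the CVS (L)
    distance_km: distance driven during the sampling interval (km)
    """
    mass_g = conc_ppm * 1e-6 * (molar_mass / V_MOLAR) * v_mix_l
    return mass_g / distance_km

# e.g. 120 ppm CO in 8000 L of diluted exhaust over a 1.5 km traffic segment
ef_co = unit_emission_g_per_km(120.0, 8000.0, 1.5, M_CO)
```

The same formula, with the appropriate molar mass (46.01 g/mol for NO2-equivalent NOx, for example), yields the per-kilometer unit emissions compared against the Artemis database.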
Procedia PDF Downloads 275
2958 How Can Personal Protective Equipment Be Best Used and Reused: A Human Factors Based Look at Donning and Doffing Procedures
Authors: Devin Doos, Ashley Hughes, Trang Pham, Paul Barach, Rami Ahmed
Abstract:
Over 115,000 Health Care Workers (HCWs) have died from COVID-19, and millions have been infected while caring for patients. HCWs have filed thousands of safety complaints due to Personal Protective Equipment (PPE) shortages, including concerns about inadequate PPE and PPE reuse. Protocols for donning and doffing PPE remain ambiguous, lack an evidence base, and often result in wide deviations in practice. PPE donning and doffing protocol deviations commonly result in self-contamination but have not been thoroughly addressed. No evidence-driven protocols provide guidance on protecting HCWs during periods of PPE reuse. Objective: The aim of this study was to examine safety-related threats and risks to HCWs due to the reuse of PPE among Emergency Department personnel. Method: We conducted a prospective observational study to examine the risks of reusing PPE. First, ED personnel were asked to don and doff PPE in a simulation lab. Each participant was asked to don and doff PPE five times, according to the maximum reuse recommendation set by the Centers for Disease Control and Prevention (CDC). Each participant was video recorded; the recordings were reviewed and coded independently by at least 2 of the 3 trained coders for safety behaviors and riskiness of actions. A third coder was brought in when agreement between the 2 coders could not be reached. Agreement between coders was high (81.9%), and all disagreements (100%) were resolved via consensus. A bowtie risk assessment chart was constructed to analyze the factors that contribute to the increased risks HCWs face due to PPE use and reuse. Agreement among content experts in the fields of Emergency Medicine, Human Factors, and Anesthesiology was used to select aspects of health care that both contribute to and mitigate the risks associated with PPE reuse.
Findings: Twenty-eight clinician participants completed five rounds of donning/doffing PPE, yielding 140 PPE donning/doffing sequences. Two emerging threats were associated with behaviors in donning, doffing, and re-using PPE: (i) direct exposure to contaminant, and (ii) transmission/spread of contaminant. Protective behaviors included hand hygiene, not touching the patient-facing surface of PPE, and ensuring a proper fit and closure of all PPE materials. All participants (100%, n=28) deviated from the CDC-recommended order, and most participants (92.85%, n=26) self-contaminated at least once during reuse. Other frequent errors included failure to tie all ties on the PPE (92.85%, n=26) and failure to wash hands after a contamination event occurred (39.28%, n=11). Conclusions: There is wide variation, with regular errors, in how HCWs don and doff PPE, including when reusing PPE, which led to self-contamination. Some errors were deemed "recoverable", such as washing hands after touching a patient-facing surface to remove the contaminant. Other errors, such as using a contaminated mask and accidentally spreading contaminant to the neck and face, can lead to compound risks that are unique to repeated PPE use. A more comprehensive understanding of the threats to HCW safety, and a complete approach to mitigating the underlying risks, including visualization with risk management tools, may aid future PPE design and workflow and space solutions. Keywords: bowtie analysis, health care, PPE reuse, risk management
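Coder agreement of the kind reported above (81.9% raw agreement) is often complemented with a chance-corrected statistic; a minimal sketch of percent agreement and Cohen's kappa follows, using hypothetical behavior codes rather than the study's video coding data.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Raw proportion of items on which two coders assigned the same code."""
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    ca, cb = Counter(coder_a), Counter(coder_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)  # expected chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes ("safe"/"risky") for ten observed behaviors
a = ["safe", "risky", "safe", "safe", "risky", "safe", "risky", "safe", "safe", "safe"]
b = ["safe", "risky", "safe", "risky", "risky", "safe", "risky", "safe", "safe", "safe"]

p_o = percent_agreement(a, b)
kappa = cohens_kappa(a, b)
```

Kappa is always at or below the raw agreement, since part of the raw agreement is attributable to chance given each coder's marginal code frequencies.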
Procedia PDF Downloads 90
2957 Evaluation of Egg Quality Parameters in the Isa Brown Line in Intensive Production Systems in the Ocaña Region, Norte de Santander
Authors: Meza-Quintero Myriam, Lobo Torrado Katty Andrea, Sanchez Picon Yesenia, Hurtado-Lugo Naudin
Abstract:
The objective of the study was to evaluate the internal and external quality of eggs in three production housing systems, floor, cage, and grazing, for laying birds of the Isa Brown line in the laying period between weeks 35 and 41. 135 hens distributed in 3 treatments of 45 birds each were used (the replicates were the seven weeks of the trial). The feed supplied in the floor and cage systems was 114 g/bird/day; in the grazing system, 14 grams less concentrate was provided. Nine eggs were collected to be studied and analyzed in the animal nutrition laboratory (3 eggs per housing system). A randomized statistical model was implemented; for the statistical analysis of the data, the IBM® Statistical Products and Services Solution (SPSS) software, version 2.3, was used. The measurement instruments were a vernier caliper for measurements in millimeters, a YolkFan™ 16 from Roche DSM for the evaluation of egg yolk pigmentation, a digital scale for measurements in grams, a micrometer for measurements in millimeters, and laboratory evaluation of dry matter, ashes, and ethereal extract. The results showed no significant differences (P-value > 0.05) for egg size (0.04 ± 3.55), shell thickness (0.46 ± 3.55), albumen weight (0.18 ± 3.55), albumen height (0.38 ± 3.55), yolk weight (0.64 ± 3.55), yolk height (0.54 ± 3.55), or yolk pigmentation (1.23 ± 3.55). It was concluded that hens in the three production systems, floor, cage, and grazing, did not show statistically significant differences in the internal and external quality of the egg for the parameters studied. Keywords: biological, territories, genetic resource, egg
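A comparison of three treatments like the one above is typically tested with a one-way analysis of variance. Below is a minimal sketch of the F statistic computation, with illustrative egg weights rather than the study's measurements; a small F (below the critical value) corresponds to the "no significant difference" conclusion.

```python
from statistics import mean

def one_way_f(groups):
    """F statistic for a one-way ANOVA across treatment groups."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical egg weights (g) per housing system (illustrative values)
floor   = [60.1, 61.3, 59.8]
cage    = [60.5, 60.9, 61.1]
grazing = [59.9, 60.4, 60.8]

f_stat = one_way_f([floor, cage, grazing])
```

With k = 3 groups and n = 9 observations, the F statistic would be compared against F(2, 6) at the chosen significance level; here it falls well below that threshold.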
Procedia PDF Downloads 80
2956 The Impact of ISO 9001 Certification on Brazilian Firms' Performance: Insights from Multiple Case Studies
Authors: Matheus Borges Carneiro, Fabiane Leticia Lizarelli, José Carlos De Toledo
Abstract:
The evolution of quality management by companies was strongly enabled by, among other factors, ISO 9001 certification, which is considered a crucial requirement by several customers. Likewise, performance measurement provides useful insights for companies to identify how their decision-making process is reflected in their improvement. One of the most used performance measurement models is the balanced scorecard (BSC), which uses four perspectives to address a firm's performance: financial, internal process, customer satisfaction, and learning and growth. Studies relating ISO 9001 to business performance have mostly adopted a quantitative approach to identify the standard's causal effect on a firm's performance. However, to verify how this influence occurs, an in-depth analysis within a qualitative approach is required. Therefore, this paper aims to verify the impact of ISO 9001:2015 on Brazilian firms' performance from the balanced scorecard perspective. Hence, nine certified companies located in the Southeast region of Brazil were studied through a multiple case study approach. Within this study, it was possible to identify the positive impact of ISO 9001 on firms' overall performance, and four Critical Success Factors (CSFs) were identified as relevant to the linkage between ISO 9001 and firms' performance: employee involvement, top management, process management, and customer focus. Due to the COVID-19 pandemic, the interviews were limited to the quality manager specialists, and the sample was limited since several companies were closed during the period of the study. This study presents an in-depth analysis of the relationship between ISO 9001 certification and firms' performance in a developing country. Keywords: balanced scorecard, Brazilian firms' performance, critical success factors, ISO 9001 certification, performance measurement
Procedia PDF Downloads 198
2955 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and an artificial neural network. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were subsequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models depends highly on the range of cases for which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE.
In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result than those generated by the life cycle methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system. Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
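As a toy illustration of the zero-order Sugeno-type inference discussed above, the sketch below evaluates two inputs (irradiation and efficiency) against four hand-written rules and takes the weighted average of the rule consequents. The membership ranges and consequent GWP values are invented for illustration; they are not the calibrated model from the study.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno_gwp(irradiation, efficiency):
    """Zero-order Sugeno-type inference with four illustrative rules.

    Rule firing strength = min of the antecedent memberships; output is
    the firing-strength-weighted average of constant consequents.
    """
    low_irr  = tri(irradiation, 500, 1000, 1500)   # kWh/m2/yr, assumed ranges
    high_irr = tri(irradiation, 1000, 2000, 3000)
    low_eff  = tri(efficiency, 0.05, 0.10, 0.15)
    high_eff = tri(efficiency, 0.10, 0.20, 0.30)

    rules = [
        (min(low_irr, low_eff),   80.0),  # gCO2-eq/kWh consequents (assumed)
        (min(low_irr, high_eff),  60.0),
        (min(high_irr, low_eff),  50.0),
        (min(high_irr, high_eff), 30.0),
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else float("nan")

gwp_sunny = sugeno_gwp(2000, 0.20)  # high irradiation, high efficiency
gwp_dim   = sugeno_gwp(1000, 0.10)  # low irradiation, low efficiency
```

Because the consequents are bounded constants, a weighted average of this form cannot leave the range of the rule outputs, which is one reason a knowledge-based FIS avoids the implausible negative-GWP extrapolations that a trained ANFIS can produce outside its calibration range.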
Procedia PDF Downloads 164
2954 The Effect of Bath Composition for Hot-Dip Aluminizing of AISI 4140 Steel
Authors: Aptullah Karakas, Murat Baydogan
Abstract:
Hot-dip aluminizing (HDA) is one of several aluminizing methods used to form wear-, corrosion- and oxidation-resistant aluminide layers on the surface. In this method, the substrate is dipped into a molten aluminum bath, held in the bath for several minutes, and cooled down to room temperature in air. A subsequent annealing after the HDA process is generally performed. The main advantage of HDA is its very low investment cost in comparison with other aluminizing methods such as chemical vapor deposition (CVD), pack aluminizing, and metalizing. In the HDA process, Al or Al-Si molten baths are mostly used. However, in this study, three different Al alloys, Al4043 (Al-Si), Al5356 (Al-Mg), and Al7020 (Al-Zn), were used as the molten bath in order to see their effects on the morphological and mechanical properties of the resulting aluminide layers. AISI 4140 low-alloy steel was used as the substrate. The parameters of the HDA process were bath composition, bath temperature, and dipping time. These parameters were considered within a Taguchi L9 orthogonal array. After the HDA process and subsequent diffusion annealing, coating thickness measurement, microstructural analysis, and hardness measurement of the aluminide layers were conducted. The optimum process parameters were evaluated according to the coating morphology, such as cracks and Kirkendall porosity, and the hardness of the coatings. According to the results, a smooth and clean aluminide layer with less Kirkendall porosity and fewer cracks was observed on the sample that was aluminized in the molten Al7020 bath at 700 °C for 10 minutes and subsequently diffusion annealed at 750 °C. The hardness of the aluminide layer was between 1100 and 1300 HV, and the coating thickness was approximately 400 µm. The results were promising in that a hard and thick aluminide layer with less Kirkendall porosity and fewer cracks could be formed. It is, therefore, concluded that an Al7020 bath may be used in the HDA process of AISI 4140 steel substrates. Keywords: hot-dip aluminizing, microstructure, hardness measurement, diffusion annealing
Procedia PDF Downloads 76
2953 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear to be stable in their natural, normally dry state, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often have disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain accurate estimates of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration purposes, spatial sub-surface soil dielectric data over wide areas and to great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement.
Some of the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model predictions and experimental results is obtained. Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence
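One well-known dielectric mixing rule linking bulk permittivity to volumetric water content is the CRIM (complex refractive index model) equation; the sketch below inverts it for water content as a stand-in for the (unspecified) mixing equation used in the study. The permittivity and porosity values are illustrative assumptions.

```python
import math

def crim_water_content(eps_bulk, porosity,
                       eps_solid=4.7, eps_water=80.0, eps_air=1.0):
    """Invert the CRIM mixing rule for volumetric water content theta:

        sqrt(eps_b) = (1-n)*sqrt(eps_s) + theta*sqrt(eps_w) + (n-theta)*sqrt(eps_a)

    where n is porosity and eps_s, eps_w, eps_a are the permittivities of
    the solid grains, water, and air (typical assumed values as defaults).
    """
    num = (math.sqrt(eps_bulk)
           - (1 - porosity) * math.sqrt(eps_solid)
           - porosity * math.sqrt(eps_air))
    return num / (math.sqrt(eps_water) - math.sqrt(eps_air))

# e.g. bulk permittivity 12 measured by TDR/GPR in a soil of porosity 0.4
theta = crim_water_content(eps_bulk=12.0, porosity=0.4)
```

An inversion of this kind is what allows a subsidence model originally written in terms of moisture content to be re-expressed purely in terms of the measured dielectric data.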
Procedia PDF Downloads 363
2952 Psychometric Properties of the EQ-5D-3L and EQ-5D-5L Instruments for Health Related Quality of Life Measurement in the Indonesian Population
Authors: Dwi Endarti, Susi A. Kristina, Rizki Noorizzati, Akbar E. Nugraha, Fera Maharani, Kika A. Putri, Asninda H. Azizah, Sausanzahra Angganisaputri, Yunisa Yustikarini
Abstract:
Cost-utility analysis is the most recommended pharmacoeconomic method since it allows wide comparison of cost-effectiveness results from different interventions. The method uses the outcome of quality-adjusted life years (QALY) or disability-adjusted life years (DALY). Measurement of QALY requires data on utility and life years gained. Utility is measured with an instrument for quality of life measurement such as the EQ-5D. Recently, the EQ-5D has been available in two versions, the EQ-5D-3L and the EQ-5D-5L. This study aimed to compare the EQ-5D-3L and EQ-5D-5L to examine the most suitable version for the Indonesian population. This was an observational study employing a cross-sectional approach. Data on quality of life measured with the EQ-5D-3L and EQ-5D-5L were collected from several population groups: respondents with chronic diseases, respondents with acute diseases, and respondents from the general population (without illness) in Yogyakarta Municipality, Indonesia. Convenience samples of hypertension patients (83), diabetes mellitus patients (80), osteoarthritis patients (47), acute respiratory tract infection patients (81), cephalgia patients (43), dyspepsia patients (42), and respondents from the general population (293) were recruited in this study. Responses on the 3L and 5L versions of the EQ-5D were compared by examining the psychometric properties, including agreement, internal consistency, ceiling effect, and convergent validity. Based on these psychometric tests, the EQ-5D-5L tended to have better psychometric properties than the EQ-5D-3L. Future health-related quality of life (HRQOL) measurements for pharmacoeconomic studies in Indonesia should therefore apply the EQ-5D-5L. Keywords: EQ-5D, health related quality of life, Indonesian population, psychometric properties
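Of the psychometric properties compared above, the ceiling effect is the most mechanical to compute: it is the share of respondents reporting level 1 ("no problems") on all five dimensions. A minimal sketch with hypothetical EQ-5D profiles follows; the response data are invented for illustration.

```python
def ceiling_effect(profiles):
    """Share of respondents reporting 'no problems' on every dimension
    (profile 11111), a common ceiling-effect indicator for EQ-5D."""
    full_health = sum(1 for p in profiles if all(level == 1 for level in p))
    return full_health / len(profiles)

# Hypothetical EQ-5D profiles, one 5-tuple of levels per respondent
# (mobility, self-care, usual activities, pain/discomfort, anxiety/depression)
responses_3l = [(1,1,1,1,1), (1,1,1,1,1), (1,2,1,1,1), (1,1,1,1,1), (2,1,1,2,1)]
responses_5l = [(1,1,1,1,1), (1,2,1,1,1), (1,2,1,1,2), (1,1,2,1,1), (2,1,1,3,1)]

ceiling_3l = ceiling_effect(responses_3l)
ceiling_5l = ceiling_effect(responses_5l)
```

A lower ceiling on the 5L version, as in this toy example, is one of the ways the finer 5-level scale shows better discriminatory power than the 3-level scale.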
Procedia PDF Downloads 477
2951 A Generalized Sparse Bayesian Learning Algorithm for Near-Field Synthetic Aperture Radar Imaging: By Exploiting Impropriety and Noncircularity
Authors: Pan Long, Bi Dongjie, Li Xifeng, Xie Yongle
Abstract:
Near-field synthetic aperture radar (SAR) imaging is an advanced nondestructive testing and evaluation (NDT&E) technique. This paper investigates the complex-valued signal processing related to the near-field SAR imaging system, where the measurement data turn out to be noncircular and improper, meaning that the complex-valued data are correlated with their complex conjugate. Furthermore, we discover that the degree of impropriety of the measurement data and that of the target image can be highly correlated in near-field SAR imaging. Based on these observations, a modified generalized sparse Bayesian learning algorithm is proposed, taking impropriety and noncircularity into account. Numerical results show that the proposed algorithm provides a performance gain with the help of the noncircular assumption on the signals. Keywords: complex-valued signal processing, synthetic aperture radar, 2-D radar imaging, compressive sensing, sparse Bayesian learning
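The degree of impropriety mentioned above can be quantified by the circularity coefficient |E[z²]|/E[|z|²], which is 0 for proper (circular) data and approaches 1 for maximally improper data. A minimal numerical sketch follows, on synthetic Gaussian data rather than SAR measurements.

```python
import numpy as np

def circularity_coefficient(z):
    """Degree of impropriety of zero-mean complex data:
    |E[z^2]| / E[|z|^2], estimated by sample averages."""
    return abs(np.mean(z**2)) / np.mean(np.abs(z)**2)

rng = np.random.default_rng(1)
n = 100_000

# Proper (circular) data: equal-power, uncorrelated real and imaginary parts
proper = rng.normal(size=n) + 1j * rng.normal(size=n)

# Improper data: real/imaginary power imbalance makes E[z^2] nonzero
improper = rng.normal(size=n) + 1j * 0.1 * rng.normal(size=n)

rho_proper = circularity_coefficient(proper)
rho_improper = circularity_coefficient(improper)
```

An algorithm that models the pseudo-covariance E[zzᵀ] alongside the ordinary covariance E[zzᴴ], as the modified sparse Bayesian learning method does, can exploit precisely the information that a nonzero coefficient of this kind reveals.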
Procedia PDF Downloads 131
2950 Software-Defined Radio Based Channel Measurement System of Wideband HF Communication System in Low-Latitude Region
Authors: P. H. Mukti, I. Kurniawati, F. Oktaviansyah, A. D. Adhitya, N. Rachmadani, R. Corputty, G. Hendrantoro, T. Fukusako
Abstract:
HF communication systems are an attractive field among many researchers since they can reach long-distance areas at low cost. This long-distance communication can be achieved by exploiting the ionosphere as a transmission medium for the HF radio wave. However, due to the dynamic nature of the ionosphere, the channel characteristics of HF communication have to be investigated in order to give better performance. Many techniques to characterize HF channels are available in the literature. However, none of those techniques describe the HF channel characteristics in low-latitude regions, especially equatorial areas. Since the ionosphere around the equatorial region exhibits the equatorial spread F (ESF) phenomenon, characterizing the wideband HF channel in low-latitude regions becomes an important investigation. On the other side, the appearance of software-defined radio has attracted the interest of many researchers. Accordingly, in this paper an SDR-based channel measurement system is proposed for characterizing the HF channel in low-latitude regions. Keywords: channel characteristic, HF communication system, LabVIEW, software-defined radio, universal software radio peripheral
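A common way such an SDR-based sounder estimates the channel is to transmit a known pseudo-noise probe and cross-correlate the received signal against it, which collapses each propagation path into a correlation peak at its delay. The sketch below illustrates the idea on a synthetic two-path channel; the path delays, probe length, and noise level are assumptions, not the proposed system's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

# Known wideband probe: a pseudo-noise (+/-1) sequence, as transmitted
pn = rng.choice([-1.0, 1.0], size=1023)

# Assumed two-path ionospheric channel: direct path plus a weaker delayed echo
channel = np.zeros(30)
channel[0], channel[12] = 1.0, 0.5

received = np.convolve(pn, channel)
received += rng.normal(0.0, 0.05, received.size)  # measurement noise

# Cross-correlate with the known probe to recover the impulse response;
# lag 0 sits at index len(pn)-1 of the full correlation output.
corr = np.correlate(received, pn, mode="full") / pn.size
est = corr[pn.size - 1 : pn.size - 1 + channel.size]

peak_delay = int(np.argmax(np.abs(est[1:]))) + 1  # strongest echo after the direct path
```

Repeating this sounding over time would reveal the delay spread and time variation that equatorial spread F imposes on the wideband HF channel.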
Procedia PDF Downloads 486
2949 Optimization of Geometric Parameters of Microfluidic Channels for Flow-Based Studies
Authors: Parth Gupta, Ujjawal Singh, Shashank Kumar, Mansi Chandra, Arnab Sarkar
Abstract:
Microfluidic devices have emerged as indispensable tools across various scientific disciplines, offering precise control and manipulation of fluids at the microscale. Their efficacy in flow-based research, spanning engineering, chemistry, and biology, relies heavily on the geometric design of microfluidic channels. This work introduces a novel approach to optimise these channels through Response Surface Methodology (RSM), departing from the conventional practice of addressing one parameter at a time. Traditionally, optimising microfluidic channels involved isolated adjustments to individual parameters, limiting the comprehensive understanding of their combined effects. In contrast, our approach considers the simultaneous impact of multiple parameters, employing RSM to efficiently explore the complex design space. The outcome is an innovative microfluidic channel that consumes an optimal sample volume and minimises flow time, enhancing overall efficiency. The relevance of geometric parameter optimisation in microfluidic channels extends significantly into biomedical engineering. The flow characteristics of porous materials within these channels depend on many factors, including fluid viscosity, environmental conditions (such as temperature and humidity), and specific design parameters like sample volume, channel width, channel length, and substrate porosity. This intricate interplay directly influences the performance and efficacy of microfluidic devices, which, if not optimised, can lead to increased costs and errors in disease testing and analysis. In the context of biomedical applications, the proposed approach addresses the critical need for precision in fluid flow. It mitigates the manufacturing costs associated with trial-and-error methodologies by optimising multiple geometric parameters concurrently. The resulting microfluidic channels offer enhanced performance and contribute to a streamlined, cost-effective process for testing and analysing diseases.
A key highlight of our methodology is its consideration of the interconnected nature of geometric parameters. For instance, the volume of the sample, when optimized alongside channel width, length, and substrate porosity, creates a synergistic effect that minimizes errors and maximizes efficiency. This holistic optimization approach ensures that microfluidic devices operate at their peak performance, delivering reliable results in disease testing. Keywords: microfluidic device, minitab, statistical optimization, response surface methodology
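The RSM step described above, fitting a full second-order model to measurements over the design factors and locating its stationary point, can be sketched as follows. The two coded factors, the underlying response, and the noise level are illustrative assumptions, not the study's experimental design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for flow-time measurements over two design factors
# (e.g., channel width w and length l, coded to [-1, 1]); values illustrative.
w = rng.uniform(-1, 1, 60)
l = rng.uniform(-1, 1, 60)
flow_time = (5.0 + 1.2 * w - 0.8 * l + 0.9 * w**2 + 0.6 * l**2 - 0.4 * w * l
             + rng.normal(0.0, 0.05, 60))

# Full second-order (quadratic) response surface model, fit by least squares
X = np.column_stack([np.ones_like(w), w, l, w**2, l**2, w * l])
beta, *_ = np.linalg.lstsq(X, flow_time, rcond=None)

# Stationary point of the fitted surface: solve gradient = 0 for (w, l)
b = beta[1:3]
B = np.array([[2 * beta[3], beta[5]],
              [beta[5], 2 * beta[4]]])
w_opt, l_opt = np.linalg.solve(B, -b)
```

If the matrix B is positive definite, the stationary point is a minimum of the fitted flow time, which is the candidate design RSM would propose for the channel geometry.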
Procedia PDF Downloads 68