Search results for: continuous measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4817

4307 Corrosion Inhibition of Copper in 1M HNO3 Solution by Oleic Acid

Authors: S. Nigri, R. Oumeddour, F. Djazi

Abstract:

The inhibition of the corrosion of copper in 1 M HNO3 solution by oleic acid was investigated by weight loss measurement, potentiodynamic polarization and scanning electron microscope (SEM) studies. The experimental results showed that this compound provides good corrosion inhibition and that the inhibition efficiency increases with inhibitor concentration, reaching 98%. The adsorption of the inhibitor molecules onto the metal surface was found to obey the Langmuir adsorption isotherm. The effect of temperature on the corrosion behavior of copper in 1 M HNO3, without and with inhibitor at different concentrations, was studied in the range from 303 to 333 K, and the kinetic activation parameters Ea, ∆Ha and ∆Sa were evaluated. Tafel plot analysis revealed that oleic acid acts as a mixed-type inhibitor. SEM analysis substantiated the formation of a protective layer over the copper surface.

Keywords: oleic acid, weight loss, electrochemical measurement, SEM analysis

Procedia PDF Downloads 395
4306 Modular Robotics and Terrain Detection Using Inertial Measurement Unit Sensor

Authors: Shubhakar Gupta, Dhruv Prakash, Apoorv Mehta

Abstract:

In this project, we design a modular robot capable of using and switching between multiple methods of propulsion and of classifying terrain based on input from an Inertial Measurement Unit (IMU). We wanted to make a robot that is not only intelligent in its functioning but also versatile in its physical design. The advantage of a modular robot is that it can be designed to hold several movement apparatuses, such as wheels, legs for a hexapod or quadpod setup, propellers for underwater locomotion, and any other solution that may be needed. The robot takes roughness input from a gyroscope and an accelerometer in the IMU and, based on the terrain classification from an artificial neural network, decides which method of propulsion would best optimize its movement. This provides the bot with adaptability over a set of terrains, meaning it can optimize its locomotion on a terrain based on its roughness. A feature like this would be a great asset in autonomous exploration or research drones.
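
A minimal sketch of the idea, assuming hypothetical roughness features (window standard deviations of IMU axes) and stand-in terrain labels rather than the authors' actual dataset or network architecture:

import numpy as np
from sklearn.neural_network import MLPClassifier

def roughness_features(accel, gyro):
    """Summarize one window of IMU samples (N x 3 arrays) as roughness features."""
    return np.hstack([accel.std(axis=0), gyro.std(axis=0),
                      np.abs(np.diff(accel, axis=0)).mean(axis=0)])

# Hypothetical training data: IMU windows whose noise level stands in for terrain roughness
rng = np.random.default_rng(0)
levels = rng.choice([0.05, 0.3, 0.8], size=300)
X = np.vstack([roughness_features(rng.normal(0, s, (200, 3)),
                                  rng.normal(0, s, (200, 3))) for s in levels])
y = np.digitize(X[:, 0], [0.1, 0.5])   # stand-in labels derived from roughness

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# At runtime the robot would classify the latest window and pick a propulsion mode
modes = {0: "wheels", 1: "legs", 2: "propellers"}
terrain = clf.predict(X[:1])[0]
print("selected propulsion:", modes[int(terrain)])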

Keywords: modular robotics, terrain detection, terrain classification, neural network

Procedia PDF Downloads 145
4305 Location Detection of Vehicular Accident Using Global Navigation Satellite Systems/Inertial Measurement Units Navigator

Authors: Neda Navidi, Rene Jr. Landry

Abstract:

Vehicle tracking and accident recognition are of interest to many industries, such as insurance and vehicle rental companies. The main goal of this paper is to detect the location of a car accident by combining different methods. The methods considered in this paper are Global Navigation Satellite Systems/Inertial Measurement Units (GNSS/IMU)-based navigation and vehicle accident detection algorithms. They operate on a set of raw measurements obtained from a designed integrator black box using GNSS and inertial sensors. Another concern of this paper is the definition of an accident detection algorithm based on vehicle jerk, used to identify the position of the accident. The results show that, even in GNSS blockage areas, the position of the accident can be detected by GNSS/INS integration with a 50% improvement compared to GNSS stand-alone.
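
A rough illustration of jerk-based accident flagging, assuming a made-up threshold and toy trace rather than the paper's calibrated values:

import numpy as np

def detect_accident(accel, positions, dt, jerk_threshold=400.0):
    """Flag samples whose jerk (derivative of acceleration, m/s^3) exceeds a
    threshold and return the corresponding navigator positions.
    accel: (N, 3) IMU accelerations; positions: (N, 2) GNSS/INS fixes.
    The 400 m/s^3 threshold is purely illustrative, not a value from the paper."""
    jerk = np.linalg.norm(np.diff(accel, axis=0) / dt, axis=1)
    hits = np.where(jerk > jerk_threshold)[0] + 1
    return [(i, positions[i]) for i in hits]

# Toy trace: smooth driving with one sharp deceleration spike
t = np.arange(0, 10, 0.01)
accel = np.zeros((len(t), 3))
accel[500:503, 0] = [-2.0, -60.0, -5.0]
positions = np.column_stack([t * 15.0, np.zeros_like(t)])   # 15 m/s along x
print(detect_accident(accel, positions, dt=0.01))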

Keywords: driver behavior monitoring, integration, IMU, GNSS, monitoring, tracking

Procedia PDF Downloads 234
4304 Analysis of Advancements in Process Modeling and Reengineering at Fars Regional Electric Company, Iran

Authors: Mohammad Arabi

Abstract:

Business Process Reengineering (BPR) is a systematic approach to fundamentally redesigning organizational processes in order to achieve significant improvements in organizational performance. At Fars Regional Electric Company, implementing BPR is deemed essential to increase productivity, reduce costs, and improve service quality. The objective of this research is to evaluate and analyze the advancements in process modeling and reengineering at Fars Regional Electric Company, to provide solutions for improving the productivity and efficiency of organizational processes, and to demonstrate how BPR can be used to enhance the overall performance of the company. The research employs both qualitative and quantitative methods and includes interviews with senior managers and experts at Fars Regional Electric Company. The analytical tools include process modeling software such as Bizagi and ARIS, and statistical analysis software such as SPSS and Minitab; data analysis was conducted using advanced statistical methods. The results indicate that the use of BPR techniques can lead to a significant reduction in process execution time and an overall improvement in quality. Implementing BPR at Fars Regional Electric Company has led to increased productivity, reduced costs, and improved overall performance. The study shows that, with proper implementation of BPR and the use of modeling tools, the company can achieve significant improvements in its processes. Recommendations: (1) Continuous training for staff: invest in continuous training to enhance skills and knowledge in BPR. (2) Use of advanced technologies: utilize modeling and analysis software to improve processes. (3) Implementation of effective management systems: employ knowledge and information management systems to enhance organizational performance. (4) Continuous monitoring and review of processes: regularly review and revise processes to ensure ongoing improvement. This article highlights the importance of improving organizational processes at Fars Regional Electric Company and recommends that managers and decision-makers seriously consider reengineering processes and utilizing modeling technologies to achieve developmental goals and continuous improvement.

Keywords: business process reengineering, electric company, Fars province, process modeling advancements

Procedia PDF Downloads 49
4303 Low-Voltage and Low-Power Bulk-Driven Continuous-Time Current-Mode Differentiator Filters

Authors: Ravi Kiran Jaladi, Ezz I. El-Masry

Abstract:

Emerging technologies, such as ultra-wide band wireless access technology operating at ultra-low power, present several challenges because their inherent design limits the use of voltage-mode filters. Continuous-time current-mode (CTCM) filters have therefore become very popular in recent times, since they have a wider dynamic range, improved linearity, and extended bandwidth compared to their voltage-mode counterparts. The goal of this research is to develop analog filters that are suitable for current scaled-down CMOS technologies. The bulk-driven MOSFET is one of the most popular low-power design techniques for the existing challenges, while other techniques have obvious shortcomings. In this work, a CTCM gate-driven (GD) differentiator has been presented with a frequency range from dc to 100 MHz that operates at a very low supply voltage of 0.7 volts. A novel CTCM bulk-driven (BD) differentiator has been designed for the first time, which reduces the power consumption to a fraction of that of the GD differentiator. The GD and BD differentiators have been simulated using CADENCE with TSMC 65 nm technology for all the bilinear and biquadratic band-pass frequency responses. These basic building blocks can be used to implement higher-order filters. A 6th-order cascade CTCM Chebyshev band-pass filter has been designed using the GD and BD techniques. In conclusion, low-power GD and BD 6th-order Chebyshev stagger-tuned band-pass filters were simulated, the parameters obtained from all the resulting realizations are analyzed and compared, Monte Carlo analysis is performed for both 6th-order filters, and the results of the sensitivity analysis are presented.
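
The paper's circuits are transistor-level designs simulated in CADENCE; as a sketch of only the target response, the following computes a 6th-order analog Chebyshev band-pass prototype (a 3rd-order low-pass prototype transformed to band-pass), with ripple and band edges chosen for illustration, not taken from the paper:

import numpy as np
from scipy import signal

ripple_db = 1.0                               # assumed pass-band ripple
f_lo, f_hi = 10e6, 100e6                      # assumed band edges in Hz
wp = 2 * np.pi * np.array([f_lo, f_hi])       # rad/s for an analog prototype

# A 3rd-order low-pass prototype becomes a 6th-order band-pass filter
b, a = signal.cheby1(3, ripple_db, wp, btype='bandpass', analog=True)

w, h = signal.freqs(b, a, worN=np.logspace(6.5, 9, 500) * 2 * np.pi)
print("peak gain (dB):", 20 * np.log10(np.max(np.abs(h))))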

Keywords: bulk-driven (BD), continuous-time current-mode filters (CTCM), gate-driven (GD)

Procedia PDF Downloads 260
4302 Framework Development of Carbon Management Software Tool in Sustainable Supply Chain Management of Indian Industry

Authors: Sarbjit Singh

Abstract:

This framework development explored the status of green supply chain management (GSCM) in manufacturing SMEs and concluded that there was a significant gap with respect to carbon emissions measurement in supply chain activities. The measurement of carbon emissions within supply chains is an important green initiative toward their reduction. The majority of the SMEs faced the problem of quantifying the greenhouse gas emissions in their supply chains and of converting them into low-carbon supply chains, i.e. GSCM. Thus, the carbon management initiatives were amalgamated with the supply chain activities in order to measure and reduce the carbon emissions, conforming to the GHG Protocol scopes. Hence, this work covers the development of a carbon management software (CMS) tool to quantify carbon emissions for effective carbon management. The tool is inexpensive and easy for industries to use in managing the carbon emissions within their supply chains.
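
As a rough illustration of how such a tool can aggregate activity data against emission factors per GHG Protocol scope, the sketch below uses entirely made-up activities and factors (not values from the described CMS tool):

# Illustrative only: activity data and emission factors are assumed, not from the paper.
activities = [
    {"name": "diesel for inbound trucks", "scope": 1, "amount": 1200.0,   # litres
     "factor": 2.68},      # kg CO2e per litre (assumed factor)
    {"name": "purchased electricity",     "scope": 2, "amount": 54000.0,  # kWh
     "factor": 0.82},      # kg CO2e per kWh (assumed grid factor)
    {"name": "outsourced distribution",   "scope": 3, "amount": 8000.0,   # tonne-km
     "factor": 0.11},      # kg CO2e per tonne-km (assumed factor)
]

totals = {1: 0.0, 2: 0.0, 3: 0.0}
for a in activities:
    totals[a["scope"]] += a["amount"] * a["factor"]

for scope, kg in totals.items():
    print(f"Scope {scope}: {kg / 1000:.2f} t CO2e")
print(f"Supply-chain total: {sum(totals.values()) / 1000:.2f} t CO2e")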

Keywords: carbon emissions, carbon management software, supply chain management, Indian industry

Procedia PDF Downloads 469
4301 The Methods of Customer Satisfaction Measurement and Its Statistical Analysis towards Sales and Logistic Activities in Food Sector

Authors: Seher Arslankaya, Bahar Uludağ

Abstract:

Meeting the needs and demands of customers and pleasing them are important requirements for companies in the food sector, where the growth of competition is significantly unpredictable. Customer satisfaction is also one of the key concepts, driven mainly by the wide range of customer preferences and expectations regarding the products and services introduced and delivered to them. In order to meet customer demands, companies in the food sector are expected to have well-managed Total Quality Management (TQM), which sets out to improve the quality of products and services, to reduce costs, and to increase customer satisfaction by restructuring traditional management practices. Achievement of these goals would be determined with the help of customer satisfaction surveys, which are conducted to obtain immediate feedback and to provide quick responses. In addition, the surveys assist strategic planning, which helps to anticipate customers' future needs and expectations. Periodic measurement of customer satisfaction is a must, because a better understanding of customer perceptions from the surveys (collected via questionnaires) gives companies a clear idea of their own strengths and weaknesses, helps them keep their loyal customers, allows comparison with competitors, and maps out future progress and improvement. In this study, we propose a survey-based customer satisfaction measurement method and its statistical analysis for the sales and logistics activities of food firms. Customer satisfaction is discussed in detail. Furthermore, after analysing the data derived from the questionnaire applied to customers using the SPSS software, the various results obtained from the application are presented. By applying an ANOVA test, the study also analyses whether meaningful differences exist between customer demographic groups and their perceptions. The purpose of this study is also to identify requirements that help to remove the effects that decrease customer satisfaction and to produce loyal customers in the food industry. For this purpose, customer complaints are collected. Additionally, comments and suggestions are made according to the results obtained from the surveys, which should be useful for the strategic planning process in the food industry.
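
The abstract's ANOVA step (testing whether satisfaction differs across demographic groups) can be sketched as follows, with hypothetical 5-point ratings standing in for the survey data analysed in SPSS:

import numpy as np
from scipy import stats

# Hypothetical 5-point satisfaction ratings grouped by a demographic variable
rng = np.random.default_rng(1)
young = rng.integers(3, 6, size=40)    # ratings 3-5
middle = rng.integers(2, 6, size=55)
senior = rng.integers(1, 5, size=35)

f_stat, p_value = stats.f_oneway(young, middle, senior)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Satisfaction differs significantly between demographic groups.")
else:
    print("No significant difference between demographic groups.")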

Keywords: customer satisfaction measurement and analysis, food industry, SPSS, TQM

Procedia PDF Downloads 250
4300 Prediction of Bubbly Plume Characteristics Using the Self-Similarity Model

Authors: Li Chen, Alex Skvortsov, Chris Norwood

Abstract:

Gas release into water occurs in many industrial situations. This process results in the formation of bubbles and in acoustic emission that depends on the bubble characteristics. If the bubble creation rate (bubble volume flow rate) is of interest, an inverse method has to be used based on the measurement of the acoustic emission. However, there will be sound attenuation through the bubbly plume, which influences the measurement and should be taken into consideration in the model. The sound transmission through the bubbly plume depends on its characteristics, such as the shape and the bubble distributions. In this study, the bubbly plume shape is modelled using a self-similarity model, which has normally been applied to single-phase buoyant plumes. The prediction is compared with experimental data. It has been found that the model can be applied to a buoyant plume of gas-liquid mixture. The influence of the gas flow rate and discharge nozzle size is also studied.

Keywords: bubbly plume, buoyant plume, bubble acoustics, self-similarity model

Procedia PDF Downloads 287
4299 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping

Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa

Abstract:

The artificial neural network is one of the interesting techniques that have been advantageously used to deal with modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. Accordingly, to show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the neuron units in the last layer allows finding the optimal network parameters that fit the mapping data. Moreover, it decreases the training time during the computation process, which avoids the use of computers with high memory usage.
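
The paper's specific output-coding scheme is not detailed in the abstract; as a generic baseline of the underlying task (fitting a one-dimensional continuous mapping with a small network), a minimal sketch with an assumed target function:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Target one-dimensional mapping (illustrative choice, not the paper's task)
x = np.linspace(-2 * np.pi, 2 * np.pi, 400).reshape(-1, 1)
y = np.sin(x).ravel() + 0.3 * np.cos(3 * x).ravel()

net = MLPRegressor(hidden_layer_sizes=(32, 32), activation='tanh',
                   max_iter=5000, random_state=0)
net.fit(x, y)

print("training MSE:", np.mean((net.predict(x) - y) ** 2))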

Keywords: neural network computing, continuous functions generating the input-output mapping, decreasing the training time, machines with big memories

Procedia PDF Downloads 283
4298 The High Precision of Magnetic Detection with Microwave Modulation in Solid Spin Assembly of NV Centres in Diamond

Authors: Zongmin Ma, Shaowen Zhang, Yueping Fu, Jun Tang, Yunbo Shi, Jun Liu

Abstract:

Solid-state quantum sensors are attracting wide interest because of their high sensitivity at room temperature. In particular, the spin properties of nitrogen-vacancy (NV) color centres in diamond make them outstanding sensors of magnetic fields, electric fields and temperature under ambient conditions. Much of the work on NV magnetic sensing has aimed at achieving the smallest volume and high sensitivity of NV-ensemble-based magnetometry using micro-cavities, the light-trapping diamond waveguide (LTDW), and nano-cantilevers combined with MEMS (Micro-Electro-Mechanical System) techniques. Recently, frequency-modulated microwaves with continuous optical excitation have been proposed to achieve a high sensitivity of 6 μT/√Hz using individual NV centres at the nanoscale. In this research, we built an experiment to measure a static magnetic field with the frequency-modulated microwave method under continuous illumination with green pump light at 532 nm, using a bulk diamond sample with a high density of NV centers (1 ppm). The output of the confocal microscope was collected by an objective (NA = 0.7) and detected by a high-sensitivity photodetector. We designed a microstrip antenna for uniform and efficient excitation, which couples well with the spin ensemble at 2.87 GHz, the zero-field splitting of the NV centers. The photodetector output was sent to a lock-in amplifier (LIA); the modulated reference signal was generated from the microwave source by an IQ mixer. The detected signal from the photodetector and the reference signal enter the lock-in amplifier to realize open-loop detection of the NV atomic magnetometer, and ODMR spectra can be plotted under continuous-wave (CW) microwave excitation. Owing to the high sensitivity of the lock-in amplifier, the minimum detectable voltage can be measured, and the minimum detectable frequency shift can be obtained from this minimum voltage and the slope of the signal. The magnetic field sensitivity can then be derived from η = δB·√T, corresponding to a minimum detectable shift of 10 nT in the magnetic field. Further, frequency analysis of the noise in the system indicates a sensitivity of less than 10 nT/√Hz at 10 Hz.
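
A back-of-the-envelope sketch of the slope-to-sensitivity conversion described above, using the NV gyromagnetic ratio of roughly 28 GHz/T; the lock-in slope and noise figures are assumptions chosen only to reproduce the stated order of magnitude, not measured values from the paper:

# Illustrative numbers only; none are taken from the measurement in the paper.
gamma_nv = 28.0            # Hz of ODMR shift per nT (NV gyromagnetic ratio ~28 GHz/T)
slope = 5.0e-7             # lock-in output slope, V per Hz of detuning (assumed)
noise_density = 1.4e-4     # lock-in voltage noise, V/sqrt(Hz) (assumed)

# Smallest resolvable frequency shift per unit bandwidth, then field sensitivity
freq_noise = noise_density / slope           # Hz / sqrt(Hz)
field_sensitivity = freq_noise / gamma_nv    # nT / sqrt(Hz)

# Minimum detectable field after averaging for time T: delta_B = eta / sqrt(T)
T = 1.0                                      # seconds of averaging (assumed)
print(f"sensitivity ~ {field_sensitivity:.2f} nT/sqrt(Hz), "
      f"delta_B ~ {field_sensitivity / T ** 0.5:.2f} nT after {T} s")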

Keywords: nitrogen-vacancy (NV) centers, frequency-modulated microwaves, magnetic field sensitivity, noise density

Procedia PDF Downloads 440
4297 Fractal Analysis of Some Bifurcations of Discrete Dynamical Systems in Higher Dimensions

Authors: Lana Horvat Dmitrović

Abstract:

The main purpose of this paper is to study the box dimension as fractal property of bifurcations of discrete dynamical systems in higher dimensions. The paper contains the fractal analysis of the orbits near the hyperbolic and non-hyperbolic fixed points in discrete dynamical systems. It is already known that in one-dimensional case the orbit near the hyperbolic fixed point has the box dimension equal to zero. On the other hand, the orbit near the non-hyperbolic fixed point has strictly positive box dimension which is connected to the non-degeneracy condition of certain bifurcation. One of the main results in this paper is the generalisation of results about box dimension near the hyperbolic and non-hyperbolic fixed points to higher dimensions. In the process of determining box dimension, the restriction of systems to stable, unstable and center manifolds, Lipschitz property of box dimension and the notion of projective box dimension are used. The analysis of the bifurcations in higher dimensions with one multiplier on the unit circle is done by using the normal forms on one-dimensional center manifolds. This specific change in box dimension of an orbit at the moment of bifurcation has already been explored for some bifurcations in one and two dimensions. It was shown that specific values of box dimension are connected to appropriate bifurcations such as fold, flip, cusp or Neimark-Sacker bifurcation. This paper further explores this connection of box dimension as fractal property to some specific bifurcations in higher dimensions, such as fold-flip and flip-Neimark-Sacker. Furthermore, the application of the results to the unit time map of continuous dynamical system near hyperbolic and non-hyperbolic singularities is presented. In that way, box dimensions which are specific for certain bifurcations of continuous systems can be obtained. The approach to bifurcation analysis by using the box dimension as specific fractal property of orbits can lead to better understanding of bifurcation phenomenon. It could also be useful in detecting the existence or nonexistence of bifurcations of discrete and continuous dynamical systems.
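
As a rough numerical illustration of the box dimension of an orbit near a fixed point, the sketch below box-counts the orbit of the one-dimensional map x -> x - x^3 near its non-hyperbolic fixed point at 0; the orbit decays like n^(-1/2), so the estimate should come out close to the known value 2/3 (the map and tolerances are illustrative choices, not the paper's examples):

import numpy as np

def box_dimension(points, epsilons):
    """Estimate the box (Minkowski) dimension of a finite orbit on the line by
    counting occupied boxes of size eps and fitting log N(eps) vs log(1/eps)."""
    counts = [len(np.unique(np.floor(points / eps))) for eps in epsilons]
    slope, _ = np.polyfit(np.log(1.0 / epsilons), np.log(counts), 1)
    return slope

# Orbit of x_{n+1} = x_n - x_n^3 near the non-hyperbolic fixed point x* = 0
x, orbit = 0.5, []
for _ in range(200000):
    orbit.append(x)
    x = x - x ** 3
orbit = np.array(orbit)

eps = np.logspace(-4, -2, 10)
print("estimated box dimension:", round(box_dimension(orbit, eps), 3))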

Keywords: bifurcation, box dimension, invariant manifold, orbit near fixed point

Procedia PDF Downloads 254
4296 Surfactant Improved Heavy Oil Recovery in Sandstone Reservoirs by Wettability Alteration

Authors: Rabia Hunky, Hayat Kalifa, Bai

Abstract:

The wettability of carbonate reservoirs has been widely recognized as an important parameter in oil recovery by flooding technology, and many surfactants have been studied for this application. However, the importance of wettability alteration by surfactants in sandstone reservoirs has been poorly studied. In this paper, our recent study of the relationship between rock surface wettability and cumulative oil recovery for sandstone cores is reported. In our research, good agreement has been found between wettability and oil recovery. The nonionic surfactants Tomadol® 25-12 and Tomadol® 45-13 are very effective in altering the wettability of the sandstone core surface from highly oil-wet to water-wet conditions. In spontaneous imbibition tests, interfacial tension and contact angle measurements, these two surfactants exhibit the highest recovery of the synthetic oil prepared with heavy oil. Based on these experimental results, we can further conclude that contact angle measurements and imbibition tests can be used as rapid screening tools to identify better EOR surfactants for increasing heavy oil recovery from sandstone reservoirs.

Keywords: EOR, oil gas, IOR, WC, IF, oil and gas

Procedia PDF Downloads 103
4295 Stabilization of Rotational Motion of Spacecrafts Using Quantized Two Torque Inputs Based on Random Dither

Authors: Yusuke Kuramitsu, Tomoaki Hashimoto, Hirokazu Tahara

Abstract:

The control problem of underactuated spacecraft has attracted a considerable amount of interest. A control method for a spacecraft equipped with fewer than three control torques is useful when one of the three control torques has failed. On the other hand, the quantized control of systems is one of the important research topics of recent years. The random dither quantization method, which transforms a given continuous signal into a discrete signal by adding artificial random noise to the continuous signal before quantization, has also attracted a considerable amount of interest. The objective of this study is to develop a control method based on the random dither quantization method for stabilizing the rotational motion of a rigid spacecraft with two control inputs. In this paper, the effectiveness of the random dither quantization control method for the stabilization of the rotational motion of spacecraft with two torque inputs is verified by numerical simulations.
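
A minimal sketch of the random dither quantization step itself (uniform noise added before rounding to the quantizer step); the step size and the toy torque command are assumptions, not values from the paper:

import numpy as np

def random_dither_quantize(u, step, rng):
    """Quantize a continuous signal u to multiples of `step`, adding uniform
    dither noise in [-step/2, step/2] before rounding (random dither method)."""
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(u))
    return step * np.round((u + dither) / step)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
torque = 0.05 * np.sin(0.5 * t)            # continuous torque command (toy signal)
quantized = random_dither_quantize(torque, step=0.02, rng=rng)

# On average the dithered quantizer reproduces the continuous command
print("mean quantization error:", np.mean(quantized - torque))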

Keywords: spacecraft control, quantized control, nonlinear control, random dither method

Procedia PDF Downloads 180
4294 Improvement of Camera Calibration Based on the Relationship between Focal Length and Aberration Coefficient

Authors: Guorong Sui, Xingwei Jia, Chenhui Yin, Xiumin Gao

Abstract:

In camera-based high-precision, non-contact measurement, geometric-optical aberration inevitably disturbs the measuring system. Moreover, the aberration varies with the focal length, which increases the difficulty of calibrating the system. Therefore, understanding the relationship between the focal length and the aberration properties is a very important issue for the calibration of such measuring systems. In this study, we propose a new mathematical model, based on the plane calibration method of Zhang Zhengyou, and establish a relationship between the focal length and the aberration coefficient. By using this mathematical model and carefully modified compensation templates, the calibration precision of the system can be dramatically improved. The experimental results show that the relative error is less than 1%. This is important for optoelectronic imaging systems that measure, track and position targets by changing the camera's focal length.
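
A sketch of Zhang-style plane calibration with OpenCV, which yields the focal lengths and distortion (aberration) coefficients that the paper relates to each other; the image folder and chessboard pattern size are placeholders, and this is a generic baseline rather than the authors' modified compensation method:

import cv2
import glob
import numpy as np

pattern = (9, 6)                      # inner corners of the assumed chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_images/*.png"):      # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 size, None, None)
# dist = [k1, k2, p1, p2, k3]; repeating the calibration at several zoom settings
# lets one tabulate how the aberration coefficients vary with focal length.
print("focal lengths (px):", K[0, 0], K[1, 1], "radial k1:", dist[0][0])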

Keywords: camera calibration, aberration coefficient, vision measurement, focal length, mathematics model

Procedia PDF Downloads 364
4293 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System

Authors: Juzhong Tan, William Kerr

Abstract:

Conching is a critical procedure in chocolate processing, during which special flavors are developed and the smooth mouthfeel of the chocolate is achieved through particle size reduction of the cocoa mass and other additives. Therefore, determination of the particle size and the volatile compound profile of the cocoa bean is important for chocolate manufacturers to ensure the quality of chocolate products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small- and medium-sized chocolate manufacturers. Other alternatives, such as micrometers and microscopy, cannot provide good measurements and give little information. Volatile compound analysis of cocoa during conching has similar problems due to its high cost and limited accessibility. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was installed in a conching machine and used to monitor the volatile compound profile of the chocolate during conching. A model was established by genetic programming that correlates the volatile compound profiles, along with factors including the content of cocoa and sugar and the temperature during conching, to the particle size of the chocolate particles. The model was used to predict the particle size reduction of chocolates with different cocoa mass to sugar ratios (1:2, 1:1, 1.5:1, 2:1) at eight conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h), and the predictions were compared to laser scattering measurements of the same chocolate samples. 91.3% of the predictions were within ±5% of the laser scattering measurements, and 99.3% were within ±10%.

Keywords: cocoa bean, conching, electronic nose, genetic programming

Procedia PDF Downloads 255
4292 Use of In-line Data Analytics and Empirical Model for Early Fault Detection

Authors: Hyun-Woo Cho

Abstract:

Automatic process monitoring schemes are designed to give early warnings of unusual process events or abnormalities as soon as possible. To this end, various techniques have been developed and utilized in many industrial processes, including multivariate statistical methods, representation techniques in reduced spaces, kernel-based nonlinear techniques, etc. This work presents a nonlinear empirical monitoring scheme for batch-type production processes with incomplete process measurement data. While normal operation data are easy to obtain, unusual fault data occur infrequently and are thus difficult to collect. In this work, noise filtering steps are added in order to enhance monitoring performance by eliminating irrelevant information from the data. The performance of the monitoring scheme was demonstrated using batch process data. The results showed that the monitoring performance was improved significantly in terms of the detection success rate for process faults.

Keywords: batch process, monitoring, measurement, kernel method

Procedia PDF Downloads 323
4291 Optimization of Solar Tracking Systems

Authors: A. Zaher, A. Traore, F. Thiéry, T. Talbert, B. Shaer

Abstract:

In this paper, an intelligent approach is proposed to optimize the orientation of continuous solar tracking systems on cloudy days. Under a clear sky, direct sunlight is more important than diffuse radiation, so the panel is always pointed towards the sun. Under an overcast sky, the direct solar beam is close to zero, and the panel is placed horizontally to receive the maximum of diffuse radiation. Under partly covered conditions, the panel must be pointed towards the source that emits the maximum solar energy, which may be anywhere in the sky dome. The idea of our approach is therefore to analyze images captured by a ground-based sky camera system in order to detect the zone of the sky dome that is the optimal source of energy under cloudy conditions. The proposed approach is implemented using an experimental setup developed at the PROMES-CNRS laboratory in Perpignan (France). Under overcast conditions, the results were very satisfactory, and the intelligent approach provided efficiency gains of up to 9% relative to conventional continuous sun tracking systems.
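
A simple baseline for the image-processing step (locating the brightest zone of a sky image), using blur plus a maximum search rather than the authors' fuzzy inference system; the file name and kernel size are placeholders:

import cv2

img = cv2.imread("sky_dome.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder sky image

# Blur first so a single saturated pixel does not win over a genuinely bright zone
blurred = cv2.GaussianBlur(img, (41, 41), 0)
_, max_val, _, max_loc = cv2.minMaxLoc(blurred)

print("brightest zone (pixel coordinates):", max_loc, "intensity:", max_val)
# Mapping pixel coordinates to panel azimuth/elevation would require the sky
# camera's geometric calibration, which is outside this sketch.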

Keywords: clouds detection, fuzzy inference systems, images processing, sun trackers

Procedia PDF Downloads 192
4290 Effects of Exercise Training in the Cold on Browning of White Fat in Obese Rats

Authors: Xiquan Weng, Chaoge Wang, Guoqin Xu, Wentao Lin

Abstract:

Objective: Cold exposure and exercise serve as two powerful physiological stimuli to launch the conversion of fat-accumulating white adipose tissue (WAT) into energy-dissipating brown adipose tissue (BAT). So far, it remains to be elucidated whether exercise plus cold exposure can produce an additive effect in promoting WAT browning. Methods: 64 SD rats were subjected to high-fat and high-sugar diets for 9 weeks, successfully establishing an obesity model. They were randomly divided into 8 groups: normal control group (NC), normal exercise group (NE), continuous cold control group (CC), continuous cold exercise group (CE), intermittent cold control group (IC) and intermittent cold exercise group (IE). For continuous cold exposure, the rats stayed in a cold environment all day; for intermittent cold exposure, the rats were exposed to cold for only 4 h per day. The protocol for treadmill exercise was as follows: 25 m/min (speed), 0° (slope), 30 min each time, an interval of 10 min between two exercises, twice every two days, lasting for 5 weeks. Sampling was conducted at the end of the 5th week. The body length and weight of the rats were measured, and the Lee's index was calculated. The visceral fat rate (VFR), subcutaneous fat rate (SFR), brown fat rate (BrFR) and body fat rate (BoFR) were measured by Micro-CT LCT200, and the expression of UCP1 protein in inguinal fat was examined by Western blot. SPSS 22.0 was used for statistical analysis of the experimental results, and ANOVA was performed between groups (P < 0.05 was considered significant). Results: (1) Compared with the NC group, the weight of obese rats was significantly lower in the NE, CE and IE groups (P < 0.05), and the Lee's index of obese rats was significantly lower in the CE group (P < 0.05). Compared with the NE group, the weight of obese rats was significantly lower in the CE and IE groups (P < 0.05). (2) Compared with the NC group, the VFR and BoFR of the rats were significantly lower in the NE, CE and IE groups (P < 0.05), the SFR of the rats was significantly lower in the CE and IE groups (P < 0.05), and the BrFR of the rats was significantly higher in the CC and IC groups (P < 0.05). Compared with the NE group, the VFR and BoFR of the rats were significantly lower in the CE group (P < 0.05), the SFR of the rats was significantly higher in the CC and IC groups (P < 0.05), and the BrFR of the rats was significantly higher in the IC group (P < 0.05). (3) Compared with the NC group, UCP1 protein expression in the inguinal fat of the rats was significantly up-regulated in the NE, CC, CE, IC and IE groups (P < 0.05). Compared with the NE group, UCP1 protein expression in the inguinal fat of the rats was significantly up-regulated in the CC, CE and IE groups (P < 0.05). Conclusions: Exercise in continuous and intermittent cold, especially the former, can effectively decrease the weight and body fat rate of obese rats. This is related to the effect of cold and exercise on the browning of white fat.

Keywords: cold, browning of white fat, exercise, obesity

Procedia PDF Downloads 131
4289 An Application of Extreme Value Theory as a Risk Measurement Approach in Frontier Markets

Authors: Dany Ng Cheong Vee, Preethee Nunkoo Gonpot, Noor Sookia

Abstract:

In this paper, we consider the application of Extreme Value Theory (EVT) as a risk measurement tool. The Value at Risk (VaR) for a set of indices from six stock exchanges of frontier markets is calculated using the Peaks-over-Threshold method, and the performance of the model is evaluated index-wise using coverage tests and loss functions. Our results show that 'fat-tailedness' of the data alone is not enough to justify the use of EVT as a VaR approach; the structure of the returns dynamics is also a determining factor. The approach works well in markets that have had extremes occurring in the past, making the model capable of coping with upcoming extremes (the Colombo, Tunisia and Zagreb stock exchanges). On the other hand, we find that indices with lower past than present volatility fail to adequately deal with future extremes (Mauritius and Kazakhstan). We also conclude that using EVT alone produces quite static VaR figures that do not reflect the actual dynamics of the data.
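
A minimal Peaks-over-Threshold sketch: fit a generalized Pareto distribution to losses exceeding a threshold and compute the corresponding VaR; the heavy-tailed toy data and the 95% threshold choice are assumptions, not the paper's index returns:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
losses = stats.t.rvs(df=3, size=5000, random_state=rng)    # heavy-tailed toy losses

u = np.quantile(losses, 0.95)             # POT threshold (assumed choice)
exceedances = losses[losses > u] - u
xi, _, beta = stats.genpareto.fit(exceedances, floc=0)      # shape xi, scale beta

q = 0.99                                   # VaR confidence level
n, n_u = len(losses), len(exceedances)
var_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
print(f"POT VaR at {q:.0%}: {var_q:.3f}")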

Keywords: extreme value theory, financial crisis 2008, value at risk, frontier markets

Procedia PDF Downloads 276
4288 Cooling Profile Analysis of Hot Strip Coil Using Finite Volume Method

Authors: Subhamita Chakraborty, Shubhabrata Datta, Sujay Kumar Mukherjea, Partha Protim Chattopadhyay

Abstract:

The manufacture of multiphase high-strength steel in hot strip mills has drawn significant attention due to the possibility of forming low-temperature transformation products of austenite under continuous cooling conditions. In such an endeavor, reliable prediction of the temperature profile of the hot strip coil is essential in order to assess the evolution of the microstructure at different locations of the coil, on the basis of the corresponding Continuous Cooling Transformation (CCT) diagram. The temperature distribution profile of the hot strip coil has been determined using the finite volume method (FVM) vis-à-vis the finite difference method (FDM). It has been demonstrated that FVM offers greater computational reliability in the estimation of the contact pressure distribution, and hence the temperature distribution, for curved and irregular profiles, owing to the flexibility in the selection of grid geometry and discrete point positions. Moreover, the use of the finite volume concept allows enforcing the conservation of mass, momentum and energy, leading to enhanced accuracy of prediction.
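
A stripped-down illustration of the finite-volume bookkeeping (flux balances on each cell) for one-dimensional transient conduction; the geometry, material properties and boundary conditions are assumed steel-like values, not the actual coil model of the paper:

import numpy as np

L, n = 0.30, 60                       # slab thickness (m) and number of cells
dx = L / n
k, rho, cp = 45.0, 7850.0, 490.0      # assumed conductivity, density, heat capacity
alpha = k / (rho * cp)
dt = 0.4 * dx ** 2 / alpha            # explicit stability limit with margin

T = np.full(n, 900.0)                 # initial coil temperature (deg C)
T_air, h = 25.0, 30.0                 # ambient temperature and convective coefficient

for _ in range(20000):
    flux = -k * np.diff(T) / dx                    # conductive flux at interior faces
    q_left = h * (T_air - T[0])                    # convective flux into first cell
    q_right = h * (T_air - T[-1])                  # convective flux into last cell
    dTdt = np.empty_like(T)
    dTdt[0] = (q_left - flux[0]) / (rho * cp * dx)
    dTdt[1:-1] = (flux[:-1] - flux[1:]) / (rho * cp * dx)
    dTdt[-1] = (flux[-1] + q_right) / (rho * cp * dx)
    T += dt * dTdt

print("surface / core temperature (deg C): %.1f / %.1f" % (T[0], T[n // 2]))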

Keywords: simulation, modeling, thermal analysis, coil cooling, contact pressure, finite volume method

Procedia PDF Downloads 473
4287 Developing a Framework for Assessing and Fostering the Sustainability of Manufacturing Companies

Authors: Ilaria Barletta, Mahesh Mani, Björn Johansson

Abstract:

The concept of sustainability encompasses economic, environmental, social and institutional considerations. Sustainable manufacturing (SM) is, therefore, a multi-faceted concept. It broadly implies the development and implementation of technologies, projects and initiatives that are concerned with the life cycle of products and services and are able to bring positive impacts to the environment, company stakeholders and profitability. Because of this, achieving SM-related goals requires a holistic, life-cycle-thinking approach from manufacturing companies. Further, such an approach must rely on a logic of continuous improvement and ease of implementation in order to be effective. Currently, no comprehensively structured framework exists in the academic literature that supports manufacturing companies in identifying the issues and capabilities that can either hinder or foster sustainability. This scarcity of support extends to difficulties in obtaining quantifiable measurements in order to objectively evaluate solutions and programs and to identify improvement areas within SM for standards conformance. To bridge this gap, this paper proposes the concept of a framework for assessing and continuously improving the sustainability of manufacturing companies. The framework addresses strategies and projects for SM and operates in three sequential phases: analysis of the issues, design of solutions and continuous improvement. Interviews, observations and questionnaires are the research methods to be used for the implementation of the framework. Different decision-support methods, either already existing or novel, can be 'plugged into' each of the phases; these methods can assess anything from business capabilities to process maturity. In particular, the authors are working on the development of a sustainable manufacturing maturity model (SMMM) as decision support within the 'continuous improvement' phase. The SMMM, inspired by previous maturity models, is made up of four maturity levels ranging from 'non-existing' to 'thriving'. Aggregate findings from the use of the framework should ultimately reveal to managers and CEOs the roadmap for achieving SM goals and identify the maturity of their companies' processes and capabilities. Two cases from two manufacturing companies in Australia are currently being employed to develop and test the framework. The use of this framework will bring two main benefits: it enables visual, intuitive internal sustainability benchmarking and raises awareness of improvement areas that lead companies towards an increasingly developed SM.

Keywords: life cycle management, continuous improvement, maturity model, sustainable manufacturing

Procedia PDF Downloads 266
4286 An Experimental Study on the Measurement of Fuel to Air Ratio Using Flame Chemiluminescence

Authors: Sewon Kim, Chang Yeop Lee, Minjun Kwon

Abstract:

This study aims to establish the relationship between the optical signal of a flame and its equivalence ratio. In the experiments, the flame optical signal in a furnace is measured using a photodiode. The combustion system is composed of a metal fiber burner and a vertical furnace, and the flame chemiluminescence is measured under various experimental conditions. In this study, the flame chemiluminescence of a laminar premixed flame is measured using a commercially available photodiode, and the relationship between the equivalence ratio and the photodiode signal is investigated experimentally. In addition, a combustion control strategy using the optical signal and the fuel pressure is proposed. The results showed that a definite relationship exists between the photodiode signal and the equivalence ratio, which leads to the successful application of this system for instantaneous measurement of the equivalence ratio of the combustion system.

Keywords: flame chemiluminescence, photo diode, equivalence ratio, combustion control

Procedia PDF Downloads 397
4285 Continuous and Discontinuos Modeling of Wellbore Instability in Anisotropic Rocks

Authors: C. Deangeli, P. Obentaku Obenebot, O. Omwanghe

Abstract:

The study focuses on the analysis of wellbore instability in rock masses affected by weakness planes. In such rocks, failure can occur in the rock matrix and/or along the weakness planes, depending on the mud weight gradient; in this case, the simple Kirsch solution coupled with a failure criterion cannot supply a suitable scenario for borehole instabilities. Two different numerical approaches have been used to investigate the onset of local failure at the wall of a borehole. For each approach, the influence of the inclination of the weakness planes has been investigated by considering joint sets at 0°, 35° and 90° to the horizontal. The first set of models was carried out with FLAC 2D (Fast Lagrangian Analysis of Continua), treating the rock material as a continuous medium with a Mohr-Coulomb criterion for the rock matrix and using the ubiquitous joint model to account for the presence of the weakness planes. In this model, yield may occur in the solid, along the weak plane, or both, depending on the stress state, the orientation of the weak plane and the material properties of the solid and weak plane. The second set of models was performed with PFC2D (Particle Flow Code). This code is based on the Discrete Element Method and considers the rock material as an assembly of grains bonded by cement-like materials and pore spaces. The presence of weakness planes is simulated by degrading the bonds between grains along given directions. In general, the results of the two approaches are in agreement. However, the discrete approach seems to capture more complex phenomena related to local failure, in the form of grain detachment at the wall of the borehole. In fact, the presence of weakness planes in the discontinuous medium leads to local instability along the weak planes even in conditions not predicted by the continuous solution. In general, slip failure locations and directions do not follow the conventional wellbore breakout direction but depend upon the internal friction angle and the orientation of the bedding planes. When the weakness planes are at 0° or 90°, the behaviour is similar to that of a continuous rock material, but borehole instability is more severe when the weakness planes are inclined at an angle between 0° and 90° to the horizontal. In conclusion, the results of the numerical simulations show that the prediction of local failure at the wall of the wellbore cannot disregard the presence of weakness planes, and consequently the higher mud weight required for stability at any specific inclination of the joints. Although the discrete approach can only simulate smaller areas, because of the large number of particles required for the generation of the rock material, it seems to investigate more correctly the occurrence of failure at the microscale and, eventually, the propagation of the failed zone to a large portion of rock around the wellbore.
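
For reference, the simple Kirsch solution mentioned above gives the hoop stress at the wall of a borehole in an isotropic elastic rock; the sketch below evaluates it with assumed far-field stresses and mud pressure (not values from the paper) and locates the conventional breakout direction:

import numpy as np

sigma_H, sigma_h = 30.0, 20.0       # assumed far-field horizontal stresses (MPa)
p_w = 10.0                          # assumed mud pressure (MPa)

theta = np.linspace(0, np.pi, 181)  # angle from the sigma_H direction
sigma_theta = (sigma_H + sigma_h
               - 2.0 * (sigma_H - sigma_h) * np.cos(2.0 * theta)
               - p_w)

i_max = np.argmax(sigma_theta)
print("max hoop stress %.1f MPa at %d deg (conventional breakout direction)"
      % (sigma_theta[i_max], np.degrees(theta[i_max])))
# Comparing sigma_theta with a failure criterion for the matrix and for the
# weakness planes (at each plane orientation) indicates where failure starts.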

Keywords: continuous-discontinuous, numerical modelling, weakness planes, wellbore, FLAC 2D

Procedia PDF Downloads 499
4284 MLOps Scaling Machine Learning Lifecycle in an Industrial Setting

Authors: Yizhen Zhao, Adam S. Z. Belloum, Goncalo Maia Da Costa, Zhiming Zhao

Abstract:

Machine learning has evolved from an area of academic research to a real-world applied field. This change comes with challenges: gaps and differences exist between common practices in academic environments and those in production environments. Following the continuous integration, development and delivery practices of software engineering, similar trends have emerged in machine learning (ML) systems, called MLOps. In this paper we propose a framework that helps to streamline and introduce best practices that facilitate the ML lifecycle in an industrial setting. This framework can be used as a template that can be customized to implement various machine learning experiments. The proposed framework is modular and can be recomposed to adapt to various use cases (e.g. data versioning, remote training on the cloud). The framework inherits practices from DevOps and introduces other practices that are unique to machine learning systems (e.g. data versioning). Our MLOps practices automate the entire machine learning lifecycle and bridge the gap between development and operations.

Keywords: cloud computing, continuous development, data versioning, DevOps, industrial setting, MLOps

Procedia PDF Downloads 265
4283 Photocatalytic Packed‐Bed Flow Reactor for Continuous Room‐Temperature Hydrogen Release from Liquid Organic Carriers

Authors: Malek Y. S. Ibrahim, Jeffrey A. Bennett, Milad Abolhasani

Abstract:

Despite the potential of hydrogen (H2) storage in liquid organic carriers to achieve carbon neutrality, the energy required for H2 release and the cost of catalyst recycling have hindered its large-scale adoption. In response, a photo flow reactor packed with a rhodium (Rh)/titania (TiO2) photocatalyst is reported for the continuous and selective acceptorless dehydrogenation of 1,2,3,4-tetrahydroquinoline to H2 gas and quinoline under visible light irradiation at room temperature. The trade-off between the reactor pressure drop and its photocatalytic surface area was resolved by selective in-situ photodeposition of Rh in the photo flow reactor, post-packing, on the outer surface of the TiO2 microparticles available to the photon flux, thereby reducing the optimal Rh loading by 10 times compared to a batch reactor while facilitating catalyst reuse and regeneration. An example of using quinoline as a hydrogen acceptor to lower the energy of the hydrogen production step is demonstrated via the water-gas shift reaction.

Keywords: hydrogen storage, flow chemistry, photocatalysis, solar hydrogen

Procedia PDF Downloads 99
4282 Contactless Heart Rate Measurement System based on FMCW Radar and LSTM for Automotive Applications

Authors: Asma Omri, Iheb Sifaoui, Sofiane Sayahi, Hichem Besbes

Abstract:

Future vehicle systems demand advanced capabilities, notably in-cabin life detection and driver monitoring systems, with a particular emphasis on drowsiness detection. To meet these requirements, several techniques employ artificial intelligence methods based on real-time vital sign measurements. In parallel, Frequency-Modulated Continuous-Wave (FMCW) radar technology has garnered considerable attention in the domains of healthcare and biomedical engineering for non-invasive vital sign monitoring. FMCW radar offers a multitude of advantages, including its non-intrusive nature, continuous monitoring capacity, and its ability to penetrate through clothing. In this paper, we propose a system utilizing the AWR6843AOP radar from Texas Instruments (TI) to extract precise vital sign information. The radar allows us to estimate Ballistocardiogram (BCG) signals, which capture the mechanical movements of the body, particularly the ballistic forces generated by heartbeats and respiration. These signals are rich sources of information about the cardiac cycle, rendering them suitable for heart rate estimation. The process begins with real-time subject positioning, followed by clutter removal, computation of Doppler phase differences, and the use of various filtering methods to accurately capture subtle physiological movements. To address the challenges associated with FMCW radar-based vital sign monitoring, including motion artifacts due to subjects' movement or radar micro-vibrations, Long Short-Term Memory (LSTM) networks are implemented. LSTM's adaptability to different heart rate patterns and ability to handle real-time data make it suitable for continuous monitoring applications. Several crucial steps were taken, including feature extraction (involving amplitude, time intervals, and signal morphology), sequence modeling, heart rate estimation through the analysis of detected cardiac cycles and their temporal relationships, and performance evaluation using metrics such as Root Mean Square Error (RMSE) and correlation with reference heart rate measurements. For dataset construction and LSTM training, a comprehensive data collection system was established, integrating the AWR6843AOP radar, a Heart Rate Belt, and a smart watch for ground truth measurements. Rigorous synchronization of these devices ensured data accuracy. Twenty participants engaged in various scenarios, encompassing indoor and real-world conditions within a moving vehicle equipped with the radar system. Static and dynamic subject’s conditions were considered. The heart rate estimation through LSTM outperforms traditional signal processing techniques that rely on filtering, Fast Fourier Transform (FFT), and thresholding. It delivers an average accuracy of approximately 91% with an RMSE of 1.01 beat per minute (bpm). In conclusion, this paper underscores the promising potential of FMCW radar technology integrated with artificial intelligence algorithms in the context of automotive applications. This innovation not only enhances road safety but also paves the way for its integration into the automotive ecosystem to improve driver well-being and overall vehicular safety.
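
A minimal sketch of the LSTM regression stage described above, mapping windows of radar-derived BCG features to heart rate in bpm; the random placeholder data, window length and feature count are assumptions, not the paper's configuration or dataset:

import numpy as np
import tensorflow as tf

timesteps, n_features = 200, 4
x_train = np.random.rand(512, timesteps, n_features).astype("float32")
y_train = np.random.uniform(55, 100, size=(512, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),                  # predicted heart rate in bpm
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

rmse = model.evaluate(x_train, y_train, verbose=0)[1]
print(f"training RMSE: {rmse:.2f} bpm")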

Keywords: ballistocardiogram, FMCW Radar, vital sign monitoring, LSTM

Procedia PDF Downloads 72
4281 Fracture Crack Monitoring Using Digital Image Correlation Technique

Authors: B. G. Patel, A. K. Desai, S. G. Shah

Abstract:

The main objective of this paper is to develop a new measurement technique that does not require touching the object. Digital Image Correlation (DIC) is an advanced measurement technique used to measure the displacement of particles with very high accuracy; this powerful, innovative technique correlates two image segments to determine the similarity between them. For this study, nine geometrically similar beam specimens of different sizes, with (steel fibers and glass fibers) and without fibers, were tested under three-point bending in a closed-loop servo-controlled machine under crack mouth opening displacement control with an opening rate of 0.0005 mm/sec. Digital images were captured before loading (undeformed state) and at different instances of loading, and were analyzed using correlation techniques to compute the surface displacements, crack opening and sliding displacements, load-point displacement, crack length and crack tip location. It was seen that the CMOD and vertical load-point displacement computed using the DIC analysis match well with those measured experimentally.
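
One displacement measurement in the spirit of DIC can be sketched by tracking a small subset from the reference image into a deformed image via normalized cross-correlation; the file names and subset size below are placeholders, and full DIC would additionally use sub-pixel interpolation and subset shape functions:

import cv2

ref = cv2.imread("beam_reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder images
cur = cv2.imread("beam_loaded.png", cv2.IMREAD_GRAYSCALE)

y0, x0, half = 240, 320, 20                   # subset centre and half-width (pixels)
subset = ref[y0 - half:y0 + half, x0 - half:x0 + half]

scores = cv2.matchTemplate(cur, subset, cv2.TM_CCOEFF_NORMED)
_, _, _, best = cv2.minMaxLoc(scores)         # (x, y) of the best match's corner

u = best[0] + half - x0                       # horizontal displacement (pixels)
v = best[1] + half - y0                       # vertical displacement (pixels)
print(f"subset displacement: u = {u} px, v = {v} px")
# Repeating this over a grid of subsets gives the full-field surface displacements
# from which CMOD and the crack-tip location can be extracted.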

Keywords: Digital Image Correlation, fibres, self compacting concrete, size effect

Procedia PDF Downloads 389
4280 Assessing Perinatal Mental Illness during the COVID-19 Pandemic: A Review of Measurement Tools

Authors: Mya Achike

Abstract:

Background and Significance: Perinatal mental illness covers a wide range of conditions and has a huge influence on maternal-child health. Issues and challenges with perinatal mental health have been associated with poor pregnancy, birth, and postpartum outcomes. It is estimated that one out of five new and expectant mothers experience some degree of perinatal mental illness, which makes this a hugely significant health outcome. Certain factors increase the maternal risk for mental illness. Challenges related to poverty, migration, extreme stress, exposure to violence, emergency and conflict situations, natural disasters, and pandemics can exacerbate mental health disorders. It is widely expected that perinatal mental health is being negatively affected during the present COVID-19 pandemic. Methods: A review of studies that reported a measurement tool to assess perinatal mental health outcomes during the COVID-19 pandemic was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. PubMed, CINAHL, and Google Scholar were used to search for peer-reviewed studies published after late 2019, in accordance with the emergence of the virus. The search resulted in the inclusion of ten studies. Approach to measure health outcome: The main approach to measure perinatal mental illness is the use of self-administered, validated questionnaires, usually in the clinical setting. Summary: Widespread use of these tools has afforded the clinical and research communities the ability to identify and support women who may be suffering from mental illness disorders during a pandemic. More research is needed to validate tools in other vulnerable, perinatal populations.

Keywords: mental health during covid, perinatal mental health, perinatal mental health measurement tools, perinatal mental health tools

Procedia PDF Downloads 135
4279 Dynamic Measurement System Modeling with Machine Learning Algorithms

Authors: Changqiao Wu, Guoqing Ding, Xin Chen

Abstract:

In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear system with a single input and single output can be modeled with a shallow neural network, with gradient-based optimization algorithms used to search for the proper coefficients. In addition, methods using the normal equation and second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood with noise under a Gaussian distribution. For conventional gradient descent, mini-batch learning and gradient with momentum contribute to faster convergence and enhance the model's ability. Lastly, experimental results proved the effectiveness of the second-order gradient descent algorithm and indicated that optimization with the normal equation was the most suitable for linear dynamic models.
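
A minimal sketch of the normal-equation route for a linear single-input single-output dynamic model, identifying an FIR-style impulse response from lagged inputs; the toy system and model order are assumptions, not the paper's measurement system:

import numpy as np

rng = np.random.default_rng(3)

# Toy SISO dynamic system: output is a fixed FIR combination of past inputs plus noise
u = rng.normal(size=2000)
true_h = np.array([0.5, 0.3, -0.2, 0.1])
y = np.convolve(u, true_h)[:len(u)] + 0.01 * rng.normal(size=len(u))

# Build the regression matrix from lagged inputs (model order is an assumed choice)
order = 4
X = np.column_stack([np.concatenate([np.zeros(k), u[:len(u) - k]])
                     for k in range(order)])

# Normal equation: theta = (X^T X)^{-1} X^T y, solved without an explicit inverse
theta = np.linalg.solve(X.T @ X, X.T @ y)
print("estimated impulse response:", np.round(theta, 3))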

Keywords: dynamic system modeling, neural network, normal equation, second order gradient descent

Procedia PDF Downloads 127
4278 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that is not masked by an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to focus on modelling stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components; instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
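
A minimal simulation of the Chinese restaurant process mentioned above, showing how a Dirichlet-process prior seats observations into an unbounded number of components without fixing their number in advance; the concentration parameter is an arbitrary illustrative choice:

import numpy as np

def chinese_restaurant_process(n_customers, alpha, rng):
    """Simulate table assignments under the CRP with concentration alpha.
    Customer i joins existing table t with probability n_t / (i + alpha) and
    opens a new table with probability alpha / (i + alpha)."""
    tables = []                        # number of customers seated at each table
    assignments = []
    for i in range(n_customers):
        probs = np.array(tables + [alpha], dtype=float) / (i + alpha)
        choice = rng.choice(len(probs), p=probs)
        if choice == len(tables):
            tables.append(1)           # open a new table (new mixture component)
        else:
            tables[choice] += 1
        assignments.append(choice)
    return assignments, tables

rng = np.random.default_rng(4)
assignments, tables = chinese_restaurant_process(500, alpha=1.5, rng=rng)
print("number of components used:", len(tables))
print("largest component sizes:", sorted(tables, reverse=True)[:5])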

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 330