Search results for: type i error

7997 Load Forecasting in Short-Term Including Meteorological Variables for Balearic Islands

Authors: Carolina Senabre, Sergio Valero, Miguel Lopez, Antonio Gabaldon

Abstract:

This paper presents a comprehensive survey of short-term load forecasting (STLF). Because the behavior of consumers and producers keeps changing as new technologies and new policies become available, forecasting is an ongoing process. The paper presents the results of a research study carried out for the Spanish Transmission System Operator (REE), showing how forecasting accuracy in the Balearic Islands can be improved by introducing meteorological variables, such as temperature, to reduce the forecasting error. The variables analyzed in terms of overall forecasting accuracy are cloudiness, solar radiation, and wind velocity. The types of days to be considered in the analysis have also been examined.
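
As a rough illustration of the point made above (synthetic data and a simple linear model standing in for the neural-network forecaster; none of the figures relate to the REE/Balearic dataset), the sketch below compares the forecasting error with and without temperature as an input:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
# synthetic temperature (annual cycle) and load (daily cycle + temperature dependence)
temp = 20 + 8 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)
load = 500 + 30 * np.sin(2 * np.pi * hours / 24) + 4 * temp + rng.normal(0, 5, hours.size)

lag_24h = load[:-24]                                  # same hour on the previous day
y = load[24:]
X_base = lag_24h.reshape(-1, 1)                       # load-only model
X_meteo = np.column_stack([lag_24h, temp[24:]])       # load + temperature model

for name, X in [("without temperature", X_base), ("with temperature", X_meteo)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)
    mape = mean_absolute_percentage_error(y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: MAPE = {100 * mape:.2f}%")
```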

Keywords: short-term load forecasting, power demand, neural networks, load forecasting

Procedia PDF Downloads 159
7996 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Passive sonar is a method for detecting acoustic signals in the ocean; it detects signals emanating from external sources. With passive sonar, only the bearing of the target can be determined, with no information about its range. Target Motion Analysis (TMA) is the process of estimating the position and speed of a target from passive sonar information. Since bearing is the only available measurement, the technique is called bearing-only TMA. Many TMA techniques have been developed; however, until now, there has been no fully reliable method for tracking an unknown target and extracting its trajectory. In this work, an effective bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for multi-beam sonar, the measurements are quantized by the sonar beam width. To deal with this, a modified gain extended Kalman filter is used. The algorithm is fine-tuned, and several modules are added to improve its performance. A dedicated validation gate module ensures the stability of the algorithm, and several indicators of performance and confidence level are designed and tested. A new method for detecting whether the target is maneuvering is proposed, together with a reactive optimal observer maneuver based on bearing measurements that ensures convergence to the correct solution. To test the performance of the proposed TMA algorithm, a simulation was carried out with a MATLAB program. The simulator models a discrete scenario for an observer and a target and takes into account the practical aspects of the problem, such as smooth speed transitions, circular turns of the ship, noisy measurements, and quantized bearing measurements from a multi-beam sonar. The tests were run for a large set of scenarios. In all tests, full tracking was achieved within 10 minutes with very small errors: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator was mostly aligned with the real performance, and the range estimation confidence level reached 90% when the range error was below 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error; however, its convergence time still needs to be improved.
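
A minimal sketch of the filtering step (a standard extended Kalman filter rather than the authors' modified-gain variant, and without the validation gate and maneuver-detection modules) for bearing-only tracking of a constant-velocity target; the initial state, noise levels, and bearing convention are illustrative assumptions:

```python
import numpy as np

def ekf_bearing_only(bearings, observer_xy, dt=1.0, r_std=np.deg2rad(1.0), q=1e-4):
    """Track target state [x, y, vx, vy] from noisy bearings (rad) taken by a moving observer."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = q * np.eye(4)                            # process noise (assumed)
    R = np.array([[r_std ** 2]])                 # bearing measurement noise
    x = np.array([1000.0, 1000.0, 0.0, 0.0])     # rough initial guess of the target state
    P = np.diag([1e6, 1e6, 100.0, 100.0])
    track = []
    for z, (ox, oy) in zip(bearings, observer_xy):
        x = F @ x                                # predict
        P = F @ P @ F.T + Q
        dx, dy = x[0] - ox, x[1] - oy            # measurement model h(x) = atan2(dy, dx)
        rho2 = dx ** 2 + dy ** 2
        h = np.arctan2(dy, dx)
        H = np.array([[-dy / rho2, dx / rho2, 0.0, 0.0]])   # Jacobian of h w.r.t. the state
        innov = np.arctan2(np.sin(z - h), np.cos(z - h))    # angle-wrapped innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K * innov).ravel()              # update
        P = (np.eye(4) - K @ H) @ P
        track.append(x.copy())
    return np.array(track)
```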

Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking

Procedia PDF Downloads 371
7995 Comparison of the Distillation Curve Obtained Experimentally with the Curve Extrapolated by a Commercial Simulator

Authors: Lívia B. Meirelles, Erika C. A. N. Chrisman, Flávia B. de Andrade, Lilian C. M. de Oliveira

Abstract:

True Boiling Point (TBP) distillation is one of the most common experimental techniques for determining petroleum properties. The TBP curve provides information about the performance of a petroleum in terms of its cuts, but the experiment takes several days to perform. Faster techniques are therefore used, in which software calculates the distillation curve from limited information about the crude oil. In order to evaluate the accuracy of the predicted distillation curve, eight points of the TBP curve and the specific gravity curve (348 K and 523 K) were inserted into the HYSYS Oil Manager, and the extended curve was evaluated up to 748 K. The methods were able to predict the curve with errors of 0.6%-9.2% (software vs. ASTM) and 0.2%-5.1% (software vs. Spaltrohr).

Keywords: distillation curve, petroleum distillation, simulation, true boiling point curve

Procedia PDF Downloads 417
7994 Energetic and Exergetic Evaluation of Box-Type Solar Cookers Using Different Insulation Materials

Authors: A. K. Areamu, J. C. Igbeka

Abstract:

The performance of box-type solar cookers has been reported by several researchers, but little attention has been paid to the effect of the insulation material on the energy and exergy efficiency of these cookers. This research aimed at evaluating the energy and exergy efficiencies of box-type cookers containing different insulation materials. Energy and exergy efficiencies of five box-type solar cookers insulated with maize cob, air (control), maize husk, coconut coir, and polyurethane foam, respectively, were obtained over a period of three years. The cookers were evaluated using water heating test procedures to determine the energy and exergy efficiencies, and the results were subjected to statistical analysis using ANOVA. The average energy inputs for the five solar cookers were 245.5, 252.2, 248.7, 241.5, and 245.5 J, respectively, while their respective average energy losses were 201.2, 212.7, 208.4, 189.1, and 199.8 J. The average exergy inputs were 228.2, 234.4, 231.1, 224.4, and 228.2 J, respectively, with respective average exergy losses of 223.4, 230.6, 226.9, 218.9, and 223.0 J. The energy and exergy efficiencies were highest for the cooker with coconut coir (37.35% and 3.90%, respectively) in the first year and lowest for air (11% and 1.07%, respectively) in the third year. Statistical analysis showed a significant difference between the energy and exergy efficiencies over the years. These results reiterate the importance of a good insulating material for a box-type solar cooker.

Keywords: efficiency, energy, exergy, heating, insolation

Procedia PDF Downloads 350
7993 Improvement of Parallel Compressor Model in Dealing with Outlet Unequal Pressure Distribution

Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang

Abstract:

Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance under inlet distortion. In the PCM calculation, the sub-compressors' outlet static pressure is assumed to be uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that can correct this error. The revised method employs the energy, momentum, and continuity equations to acquire the needed parameters and replaces the equal static pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted with the revised calculation method, and the predictions are used to evaluate the method's effectiveness. When validated against experimental data, the calculated results show only small deviations, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method has clear advantages in predicting the performance of a distorted compressor with a limited exhaust duct.

Keywords: parallel compressor model (PCM), revised calculation method, inlet distortion, outlet unequal pressure distribution

Procedia PDF Downloads 310
7992 Correlation between Microalbuminuria and Hypertension in Type 2 Diabetic Patients

Authors: Alia Ali, Azeem Taj, Muhammed Joher Amin, Farrukh Iqbal, Zafar Iqbal

Abstract:

Background: Hypertension is commonly found in patients with Diabetic Kidney Disease (DKD). Microalbuminuria is the first clinical sign of kidney involvement in patients with type 2 diabetes. Uncontrolled hypertension induces a higher risk of cardiovascular events, including death, increasing proteinuria, and progression to kidney disease. Objectives: To determine the correlation between microalbuminuria and hypertension and their association with other risk factors in type 2 diabetic patients. Methods: One hundred and thirteen type 2 diabetic patients attending the diabetic clinic of Shaikh Zayed Hospital, Lahore, Pakistan, were screened for microalbuminuria and raised blood pressure. The study was conducted from November 2012 to June 2013. Results: Patients were divided into two groups: Group 1, those with normoalbuminuria (n=63), and Group 2, those with microalbuminuria (n=50). Group 2 patients showed higher blood pressure values than Group 1. The results were statistically significant and showed poor glycemic control to be a contributing risk factor. Conclusion: The study concluded that there is a high frequency of hypertension among type 2 diabetics, and that it is much higher still among those with microalbuminuria. Early recognition of renal dysfunction through detection of microalbuminuria, with treatment started without delay, will confer future protection from end-stage renal disease as well as from hypertension and its complications in type 2 diabetic patients.

Keywords: hypertension, microalbuminuria, diabetic kidney disease, type 2 diabetes mellitus

Procedia PDF Downloads 372
7991 Aggregate Supply Response of Some Livestock Commodities in Algeria: Cointegration-Vector Error Correction Model Approach

Authors: Amine M. Benmehaia, Amine Oulmane

Abstract:

The supply response of agricultural commodities to changes in price incentives is an important issue for the success of any policy reform in the agricultural sector. This study aims to quantify the responsiveness of producers of some livestock commodities to price incentives in the Algerian context. Time series analysis is used on annual data covering a period of 52 years (1966-2018). Both cointegration and a vector error correction model (VECM) are used through the Nerlove partial adjustment model. The study attempts to determine the long-run and short-run relationships along with the magnitudes of disequilibria in the selected commodities. Results show that the short-run price elasticities are low in the cow and sheep meat sectors (8.7% and 8%, respectively), with respective long-run elasticities of 16.5% and 10.5%, whereas eggs and milk have very high short-run price elasticities (82% and 90%, respectively) with long-run elasticities of 40% and 46%, respectively. The error correction coefficients, reflecting the speed of adjustment towards the long-run equilibrium, are statistically significant and have the expected negative sign. Their estimates are 12.7% for cow meat, 33.5% for sheep meat, 46.7% for eggs, and 8.4% for milk. It seems that cow meat and milk producers correct only a weak share, about 12.7% and 8.4% respectively, of the previous year's disequilibrium from the long-run equilibrium, whereas sheep meat and egg producers adjust to correct the long-run disequilibrium at a high speed (33.5% and 46.7%, respectively). The implication is that much more in-depth research is needed to identify the factors that affect agricultural supply and to describe the effect of factors that shift supply in response to price incentives. This could provide valuable information for government in the use of appropriate policy measures.
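
A minimal sketch of the cointegration/VECM workflow described above, using statsmodels; the file name, column names, lag order, and cointegration rank below are illustrative assumptions, not the study's data or specification:

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# hypothetical annual series: log producer price and log quantity supplied (1966-2018)
data = pd.read_csv("sheep_meat.csv", index_col="year")[["log_price", "log_qty"]]

rank_test = select_coint_rank(data, det_order=0, k_ar_diff=1)   # Johansen-type rank test
print("selected cointegration rank:", rank_test.rank)

model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci")   # rank informed by the test
res = model.fit()

print(res.beta)    # long-run (cointegrating) relation -> long-run elasticity
print(res.alpha)   # error-correction coefficient -> speed of adjustment
print(res.gamma)   # short-run dynamics -> short-run elasticity
```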

Keywords: Algeria, cointegration, livestock, supply response, vector error correction model

Procedia PDF Downloads 111
7990 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models

Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh

Abstract:

In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE), by defining an error function in the mean-square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models, along with noise, makes it a challenging optimization task, which is addressed through the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), the unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices over a sufficiently large number of runs, in terms of estimation error, mean squared error, and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual parameters of the system, Genetic Algorithm based outcomes, and various deterministic approaches at different signal-to-noise ratio (SNR) levels.
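
A minimal sketch of the underlying idea, using SciPy's standard differential evolution (not the authors' CDE variant) to estimate the amplitude, frequency, and phase of a noisy sinusoid by minimizing the mean-square error; all signal values are synthetic:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
true = dict(A=1.5, f=7.0, phi=0.4)
y = true["A"] * np.sin(2 * np.pi * true["f"] * t + true["phi"]) + 0.1 * rng.standard_normal(t.size)

def mse(params):                          # error function in the mean-square sense
    A, f, phi = params
    return np.mean((y - A * np.sin(2 * np.pi * f * t + phi)) ** 2)

result = differential_evolution(mse, bounds=[(0, 5), (0.1, 20), (-np.pi, np.pi)], seed=1)
print(result.x)   # estimated (A, f, phi); compare against the true parameters
```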

Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals

Procedia PDF Downloads 276
7989 Cellular Traffic Prediction through Multi-Layer Hybrid Network

Authors: Supriya H. S., Chandrakala B. M.

Abstract:

Deep learning based models have recently been adopted successfully for network traffic prediction. However, training a deep learning model for various prediction tasks is considered a critical task for several reasons. This research work develops a Multi-Layer Hybrid Network (MLHN) for network traffic prediction and analysis; MLHN comprises three distinct networks that handle different inputs for custom feature extraction. Furthermore, an optimized and efficient parameter-tuning algorithm is introduced to enhance parameter learning. MLHN is evaluated on the "Big Data Challenge" dataset using the Mean Absolute Error, Root Mean Square Error, and R² as metrics; furthermore, MLHN's efficiency is demonstrated through comparison with a state-of-the-art approach.

Keywords: MLHN, network traffic prediction

Procedia PDF Downloads 60
7988 Profitability Assessment of Granite Aggregate Production and the Development of a Profit Assessment Model

Authors: Melodi Mbuyi Mata, Blessing Olamide Taiwo, Afolabi Ayodele David

Abstract:

The purpose of this research is to create empirical models for assessing the profitability of granite aggregate production in quarries in Akure, Ondo State. In addition, an artificial neural network (ANN) model and multivariate prediction models for granite profitability were developed in the study. A formal survey questionnaire was used to collect data. The data extracted from the case study mine include granite marketing operations, royalty, production costs, and mine production information. Descriptive statistics, MATLAB 2017, and SPSS 16.0 were used to analyze and model the data collected from granite traders in the study areas. The prediction accuracy of the ANN and multivariate regression models was compared using the coefficient of determination (R²), root mean square error (RMSE), and mean square error (MSE). Owing to the high prediction error of the regression model, the model evaluation indices revealed that the ANN model was the more suitable for predicting generated profit in a typical quarry. More quarries in Nigeria's southwest region and other geopolitical zones should be considered to improve the ANN prediction accuracy.
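
A rough illustrative sketch (synthetic data and hypothetical features, not the survey data) of the comparison described above between an ANN and a multivariate regression model scored with RMSE and R²:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                   # e.g. production cost, royalty, output
profit = 5 + X @ np.array([2.0, -1.0, 3.0]) + 0.5 * rng.normal(size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, profit, random_state=0)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)).fit(X_tr, y_tr)
mvr = LinearRegression().fit(X_tr, y_tr)

for name, model in [("ANN", ann), ("MVR", mvr)]:
    pred = model.predict(X_te)
    print(name, "RMSE:", np.sqrt(mean_squared_error(y_te, pred)), "R2:", r2_score(y_te, pred))
```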

Keywords: national development, granite, profitability assessment, ANN models

Procedia PDF Downloads 79
7987 Improved Performance Scheme for Joint Transmission in Downlink Coordinated Multi-Point Transmission

Authors: Young-Su Ryu, Su-Hyun Jung, Myoung-Jin Kim, Hyoung-Kyu Song

Abstract:

In this paper, an improved performance scheme for joint transmission is proposed for the downlink (DL) coordinated multi-point (CoMP) system under constrained transmission power. In this scheme, the serving transmission point (TP) requests joint transmission from the cooperating TP and selects one pre-coding technique according to the channel state information (CSI) reported by the user equipment (UE). The simulation results show that the bit error rate (BER) and throughput performance of the proposed scheme provide high spectral efficiency and reliable data transmission at the cell edge.
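
A small sketch of the two linear pre-coding techniques named in the keywords, zero-forcing and (regularized) MMSE, applied to a hypothetical 2x2 joint-transmission channel; the channel, SNR, and symbols are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)  # Rayleigh channel
snr_linear = 10 ** (10 / 10)                        # 10 dB
noise_var = 1 / snr_linear

W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)                             # zero-forcing precoder
W_mmse = H.conj().T @ np.linalg.inv(H @ H.conj().T + noise_var * np.eye(2))   # regularized (MMSE) precoder

s = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)        # QPSK symbols for the two users
for name, W in [("ZF", W_zf), ("MMSE", W_mmse)]:
    x = W @ s / np.linalg.norm(W @ s)               # normalize transmit power
    y = H @ x                                       # received signal (noise-free here)
    print(name, np.round(y, 3))
```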

Keywords: CoMP, joint transmission, minimum mean square error, zero-forcing, zero-forcing dirty paper coding

Procedia PDF Downloads 530
7986 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key project processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive whether found in the construction documents or during the construction phase. The aim of this research is to identify the risk factors behind architectural design errors. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questionnaire items were based on the "similar service description of study and supervision of architectural works" published by the "Vice Presidency of Strategic Planning & Supervision of I.R. Iran" as the baseline description of architects' tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of the possible causes of these risks with respect to architectural activities, the activities were located in a design process modeled with the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and compiling a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as "defining the current and future requirements of the project", "studies and space planning", and "time and cost estimation of the suggested solution" have a higher error risk than others. Moreover, the most important causes include "unclear goals of the client", "time pressure from the client", and "lack of knowledge of architects about the requirements of end-users". In detecting errors in the case study, the lack of criteria, standards, and design criteria, and the lack of coordination among them, was a barrier; nevertheless, "lack of coordination between the architectural design and the electrical and mechanical services", "violation of standard dimensions and sizes in space design", and "design omissions" were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 109
7985 The Effect of Iron Deficiency on the Magnetic Properties of Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ M-Type Hexaferrites

Authors: Kang-Hyuk Lee, Wei Yan, Sang-Im Yoo

Abstract:

Recently, Ca₁₋ₓLaₓFe₁₂O₁₉ (Ca-La M-type) hexaferrites have been reported to possess higher crystalline anisotropy than SrFe₁₂O₁₉ (Sr M-type) hexaferrite without a reduction in saturation magnetization (Ms), resulting in higher coercivity (Hc). While iron deficiency is known to be helpful for the growth and formation of NiZn spinel ferrites, the effect of iron deficiency in Ca-La M-type hexaferrites has not been reported yet. In this study, therefore, we investigated the effect of iron deficiency on the magnetic properties of Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ hexaferrites prepared by solid state reaction. The as-calcined powder was pressed into pellets and sintered at 1275-1325℃ for 4 h in air. Samples were characterized by powder X-ray diffraction (XRD), vibrating sample magnetometry (VSM), and scanning electron microscopy (SEM). Powder XRD analyses revealed that Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ (0.75 ≦ y ≦ 2.15) ferrites calcined at 1250-1300℃ for 12 h in air consisted of a single phase without secondary phases. With increasing iron deficiency, y, the lattice parameters a and c and the unit cell volume first decreased, up to y = 1.25, and then increased again. The highest Ms value of 77.5 emu/g was obtained from the Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ sample sintered at 1300℃ for 4 h in air. Detailed microstructures and magnetic properties of Ca-La M-type hexagonal ferrites will be presented for discussion.

Keywords: Ca-La M-type hexaferrite, magnetic properties, iron deficiency, hexaferrite

Procedia PDF Downloads 433
7984 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation

Authors: Hangsik Shin

Abstract:

The purpose of this research is to restore the feature locations of an under-sampled photoplethysmogram using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz, and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and compared their feature locations with those of the 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. The results showed that the time differences were dramatically decreased by interpolation, with location errors of less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error was also greatly decreased; however, it remained above 10 ms.
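
A minimal sketch of the restoration idea on a synthetic surrogate signal (not the actual photoplethysmogram data): decimate, re-interpolate with a cubic spline onto the original time grid, and compare peak locations:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import find_peaks

fs_full, fs_low = 10_000, 25                       # original and under-sampled rates (Hz)
t_full = np.arange(0, 2, 1 / fs_full)
x_full = np.sin(2 * np.pi * 1.2 * t_full) + 0.3 * np.sin(2 * np.pi * 2.4 * t_full)  # PPG-like surrogate

step = fs_full // fs_low
t_low, x_low = t_full[::step], x_full[::step]      # decimated dataset
x_rec = CubicSpline(t_low, x_low)(t_full)          # spline-restored waveform

p_ref, _ = find_peaks(x_full)                      # upper-peak locations in the reference
p_rec, _ = find_peaks(x_rec)                       # upper-peak locations after restoration
n = min(len(p_ref), len(p_rec))
err_ms = 1000 * np.abs(t_full[p_ref[:n]] - t_full[p_rec[:n]])
print("peak-location error (ms):", np.round(err_ms, 3))
```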

Keywords: peak detection, photoplethysmography, sampling, signal reconstruction

Procedia PDF Downloads 343
7983 Estimation of Population Mean under Random Non-Response in Two-Occasion Successive Sampling

Authors: M. Khalid, G. N. Singh

Abstract:

In this paper, we consider the problem of estimating the population mean on the current (second) occasion in two-occasion successive sampling under random non-response. Some modified exponential type estimators are proposed, and their properties are studied under the assumption that the number of sampling units follows a discrete distribution due to random non-response. The performances of the proposed estimators are compared with linear combinations of two estimators: (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample, under complete response. Results are demonstrated through empirical studies, which show the effectiveness of the proposed estimators. Suitable recommendations are made to survey practitioners.

Keywords: modified exponential estimator, successive sampling, random non-response, auxiliary variable, bias, mean square error

Procedia PDF Downloads 331
7982 Block Implicit Adams Type Algorithms for Solution of First Order Differential Equation

Authors: Asabe Ahmad Tijani, Y. A. Yahaya

Abstract:

The paper considers the derivation of implicit Adams-Moulton type methods with k=4 and 5. We adopted the method of interpolation and collocation of a power series approximation to generate a continuous formula, which was evaluated at off-grid points and at some grid points within the step length to generate the proposed block schemes. The schemes were investigated and found to be consistent and zero-stable. Finally, the methods were tested with numerical experiments to ascertain their level of accuracy.
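
As a point of reference for the Adams-Moulton family used above, a minimal sketch of the classical two-step implicit Adams-Moulton corrector (order 3) with functional iteration is shown below; the block schemes proposed in the paper generalize this single-point idea, and the test problem is illustrative:

```python
import numpy as np

def adams_moulton_2step(f, t0, y0, h, n_steps):
    """Solve y' = f(t, y) with the implicit 2-step Adams-Moulton formula
    y_{n+1} = y_n + h/12 * (5 f_{n+1} + 8 f_n - f_{n-1})."""
    t = t0 + h * np.arange(n_steps + 1)
    y = np.zeros(n_steps + 1)
    y[0] = y0
    # start-up value from one classical RK4 step
    k1 = f(t[0], y[0]); k2 = f(t[0] + h / 2, y[0] + h * k1 / 2)
    k3 = f(t[0] + h / 2, y[0] + h * k2 / 2); k4 = f(t[0] + h, y[0] + h * k3)
    y[1] = y[0] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    for n in range(1, n_steps):
        y_next = y[n]                               # predictor: previous value
        for _ in range(50):                         # fixed-point (functional) iteration
            y_new = y[n] + h / 12 * (5 * f(t[n + 1], y_next) + 8 * f(t[n], y[n]) - f(t[n - 1], y[n - 1]))
            if abs(y_new - y_next) < 1e-12:
                break
            y_next = y_new
        y[n + 1] = y_next
    return t, y

# test on y' = -y, y(0) = 1, exact solution exp(-t)
t, y = adams_moulton_2step(lambda t, y: -y, 0.0, 1.0, 0.1, 50)
print(np.max(np.abs(y - np.exp(-t))))               # global error
```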

Keywords: Adams-Moulton Type (AMT), off-grid, block method, consistent and zero stable

Procedia PDF Downloads 463
7981 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values

Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie

Abstract:

Iterative Learning Control (ILC) is known as a control tool for overcoming periodic disturbances in repetitive systems. The technique requires the error signal to tend to zero as the number of operations increases. The learning process in this context is strongly dependent on the initial input, which, if selected properly, makes the learning process more effective than when the system starts blind. ILC uses previously recorded execution data to update the input of the following execution/trial so that a reference trajectory is followed with high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant for trial 1; thus, a good choice of the initial input signal makes learning faster and, as a consequence, the error tends to zero faster as well. In the work presented here, an upper limit based on singular values is derived for the initial input signal applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which the system (a robot arm, for example) is required to move. Simulation results are presented to illustrate the theory introduced in this paper.
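
A minimal sketch of the setting (not the authors' derivation): a lifted SISO plant with an assumed impulse response, a simple ILC update u_{k+1} = u_k + L*e_k, and the largest singular value of the lifted plant matrix, the quantity on which the proposed bound for the trial-1 input is based:

```python
import numpy as np

N = 50                                              # trial length (samples)
h = np.array([0.5, 0.3, 0.15, 0.05])                # assumed plant impulse response
G = np.zeros((N, N))                                # lifted (lower-triangular Toeplitz) plant matrix
for i in range(N):
    for j in range(max(0, i - len(h) + 1), i + 1):
        G[i, j] = h[i - j]

r = np.sin(np.linspace(0, 2 * np.pi, N))            # reference trajectory
L = 0.5                                             # learning gain
u = np.zeros(N)                                     # trial-1 input (here: starting blind)
for trial in range(30):
    y = G @ u                                       # execute the trial
    e = r - y                                       # tracking error
    u = u + L * e                                   # ILC update for the next trial

print("final tracking error norm:", np.linalg.norm(e))
print("largest singular value of G:", np.linalg.svd(G, compute_uv=False)[0])
```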

Keywords: initial input, iterative learning control, maximum input, singular values

Procedia PDF Downloads 221
7980 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question 'for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?'. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, the most difficult cases being considered those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
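
A small sketch of the basic counting object in the argument above, the cardinality of the n-dimensional Lee sphere of radius r, computed both by the closed form and by brute force; for n = 7 and r = 2 the sphere contains 113 words:

```python
from itertools import product
from math import comb

def lee_sphere_size(n, r):
    # closed form: sum_i 2^i * C(n, i) * C(r, i) points of Z^n at L1-distance <= r from the origin
    return sum(2 ** i * comb(n, i) * comb(r, i) for i in range(min(n, r) + 1))

def lee_sphere_size_bruteforce(n, r):
    # enumerate all lattice points with coordinates in [-r, r] and L1 norm <= r
    return sum(1 for x in product(range(-r, r + 1), repeat=n) if sum(map(abs, x)) <= r)

print(lee_sphere_size(7, 2), lee_sphere_size_bruteforce(7, 2))   # both print 113
```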

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 134
7979 Assessment of Time-variant Work Stress for Human Error Prevention

Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee

Abstract:

For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The possibility of human error may be low, but its risk would be unimaginably enormous. Thus, for accident prevention, it is indispensable to analyze the influence of any factors which may raise the possibility of human error. Over the past decades, quite a few research results have shown that the performance of human operators may vary over time due to many factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. To date, quite a few assessment tools have been developed to assess the stress level of human workers. However, it is still questionable to use them for anticipating human performance, which is related to human error possibility, because they were mainly developed from the viewpoint of mental health rather than industrial safety. The stress level of a person may go up or down with work time. In that sense, if these tools are to be applicable in the safety domain, they should at least be able to assess the variation resulting from work time. Therefore, this study aimed to compare their applicability for safety purposes. More than 10 kinds of work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools mainly placed their weights on some common organizational factors such as demands, supports, and relationships, in that order, and their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up overall counterplans in a PDCA cycle or risk management activities, which would be far from practical human error prevention. Thus, it was concluded that applying stress assessment tools mainly developed for mental health seems impractical for safety purposes with respect to anticipating human performance, and that the development of a new assessment tool is inevitable if one wants to assess stress level with respect to human performance variation and accident prevention. As a consequence, as a practical countermeasure, this study proposed a new scheme for assessing the work stress level of a human operator as it varies over work time, which is closely related to the possibility of human error.

Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention

Procedia PDF Downloads 649
7978 Banking Sector Development and Economic Growth: Evidence from the State of Qatar

Authors: Fekri Shawtari

Abstract:

The banking sector plays a very crucial role in the economic development of a country. As a financial intermediary, it is assigned a great role in economic growth and stability. This paper aims to examine empirically the relationship between the banking industry and economic growth in the State of Qatar. We adopt the vector error correction model (VECM) along with Granger causality to address the long-run and short-run relationships between the banking sector and economic growth. It is expected that the results will give policy directions to policymakers for making strategies that are conducive to boosting development and achieving the targeted economic growth in the current situation.
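
A minimal sketch of the Granger-causality part of this workflow, using statsmodels on synthetic series standing in for banking-sector and GDP growth data; variable names, the data-generating process, and the lag order are illustrative assumptions:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 60
credit = rng.normal(size=n)                                             # stationary banking-sector proxy (e.g. credit growth)
gdp = 0.4 * np.concatenate(([0.0], credit[:-1])) + rng.normal(size=n)   # GDP growth partly driven by lagged credit
df = pd.DataFrame({"gdp": gdp, "credit": credit})

# does 'credit' Granger-cause 'gdp'? (the second column is tested as a cause of the first)
res = grangercausalitytests(df[["gdp", "credit"]], maxlag=2)
print("p-value of the F-test at lag 1:", res[1][0]["ssr_ftest"][1])
```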

Keywords: economic growth, banking sector, Qatar, vector error correction model, VECM

Procedia PDF Downloads 147
7977 Isolation and Identification of Diacylglycerol Acyltransferase Type-2 (DGAT2) Genes from Three Egyptian Olive Cultivars

Authors: Yahia I. Mohamed, Ahmed I. Marzouk, Mohamed A. Yacout

Abstract:

The aim of this work was to study the genetic basis of oil accumulation in olive fruit by tracking the DGAT2 (diacylglycerol acyltransferase type-2) gene in three olive cultivars of Egyptian origin, namely Toffahi, Hamed, and Maraki, using molecular marker techniques and bioinformatics tools. The results illustrate that, firstly, a specific genomic band of the Maraki cultivar was identified as DGAT2 (diacylglycerol acyltransferase type-2) and was identical to this gene in Olea europaea with 100% similarity. Secondly, a differential genomic band of the Maraki cultivar produced by the RAPD fingerprinting technique corresponded to a predicted sequence identified as DGAT2 (diacylglycerol acyltransferase type-2) in Fragaria vesca subsp. vesca with 76% sequence similarity. Thirdly and finally, a specific genomic band of the Hamed cultivar was identified as two fragments: (1) Olea europaea cultivar Koroneiki diacylglycerol acyltransferase type 2 mRNA, complete cds, with two matching regions at 99% similarity, or (2) PREDICTED: Fragaria vesca subsp. vesca diacylglycerol O-acyltransferase 2-like (LOC101313050) mRNA, with 86% similarity.

Keywords: Olea europaea, fingerprinting, diacylglycerol acyltransferase type-2 (DGAT2), Egypt

Procedia PDF Downloads 475
7976 Virtual Assessment of Measurement Error in the Fractional Flow Reserve

Authors: Keltoum Chahour, Mickael Binois

Abstract:

Due to a lack of standardization during the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained FFR value. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial portion. The FFR pressure captor is introduced inside it with a given length and bending coefficient to capture the FFR value. To get over the computational limitations (the simulation time is about 2 h 15 min for one FFR value), we generate a Gaussian process (GP) model for FFR prediction. The GP model shows good accuracy and quantifies the effective measurement error created by the random configuration of the pressure captor.
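
A minimal sketch of the surrogate-modelling step: fit a Gaussian-process regressor to a handful of (captor length, bending coefficient) to FFR pairs and predict a new configuration with an uncertainty estimate; all numbers below are placeholders, not CFD results:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# hypothetical design points: (captor length, bending coefficient) and the FFR each simulation returned
X = np.array([[10, 0.1], [10, 0.4], [20, 0.2], [30, 0.3], [30, 0.5]], dtype=float)
y = np.array([0.82, 0.79, 0.80, 0.76, 0.74])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[10.0, 0.2]),
                              normalize_y=True).fit(X, y)
mean, std = gp.predict(np.array([[25, 0.35]]), return_std=True)   # new captor configuration
print(f"predicted FFR = {mean[0]:.3f} +/- {std[0]:.3f}")
```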

Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift

Procedia PDF Downloads 103
7975 A Retrospective Study on the Age of Onset for Type 2 Diabetes Diagnosis

Authors: Mohamed A. Hammad, Dzul Azri Mohamed Noor, Syed Azhar Syed Sulaiman, Majed Ahmed Al-Mansoub, Muhammad Qamar

Abstract:

There is a progressive increase in the prevalence of early-onset Type 2 diabetes mellitus. Early detection of Type 2 diabetes enhances the length and/or quality of life through a reduction in the severity and frequency of its long-term complications, or by preventing or delaying them. The study aims to determine the age of onset at first diagnosis of Type 2 diabetes mellitus. A retrospective study was conducted in the endocrine clinic at Hospital Pulau Pinang in Penang, Malaysia, from January to December 2016. Records of 519 patients with Type 2 diabetes mellitus were screened to collect demographic data and determine the age at first diagnosis of diabetes mellitus. Patients were classified according to the age of diagnosis, gender, and ethnicity. The study included 519 patients aged 55.6±13.7 years; 265 (51.1%) were female and 254 (48.9%) male. The ethnicity distribution was Malay 191 (36.8%), Chinese 189 (36.4%), and Indian 139 (26.8%). The age at Type 2 diabetes diagnosis was 42±14.8 years: 41.5±13.7 years for females and 42.6±13.7 years for males. The distribution of diabetes onset by ethnicity was 40.7±13.7 years for Malays, 43.2±13.7 years for Chinese, and 42.3±13.7 years for Indians. Onset was classified by age as follows: the ≤20 years cohort comprised 33 (6.4%) cases, the >20-≤40 years group 190 (36.6%) patients, the >40-≤60 years group 270 (52%) subjects, and the >60 years group 22 (4.2%) patients. Ages at diagnosis ranged from 10 to 73 years. Conclusion: Malays and females have an earlier onset of diabetes than Indians, Chinese, and males. More than half of the patients developed diabetes between 40 and 60 years of age. Diabetes mellitus is becoming more common at younger ages (<40 years), and the age at diagnosis of Type 2 diabetes mellitus has decreased with time.

Keywords: age of onset, diabetes diagnosis, diabetes mellitus, Malaysia, outpatients, type 2 diabetes, retrospective study

Procedia PDF Downloads 386
7974 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images on the one hand and their memorability scores on the other. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
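
A small sketch of the two quantities correlated with memorability above, per-image reconstruction error and distinctiveness (Euclidean distance to the nearest neighbor in latent space); a toy fully-connected autoencoder and random tensors stand in for the VGG-based autoencoder and the MemCat images:

```python
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256),
                                     nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 3 * 32 * 32))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z).view_as(x), z

model = TinyAE().eval()
images = torch.rand(100, 3, 32, 32)                  # stand-in for the image dataset

with torch.no_grad():
    recon, z = model(images)
    # reconstruction error: per-image mean squared error between original and reconstruction
    rec_err = ((recon - images) ** 2).flatten(1).mean(dim=1)
    # distinctiveness: Euclidean distance to the nearest other image in latent space
    dists = torch.cdist(z, z)
    dists.fill_diagonal_(float("inf"))
    distinctiveness = dists.min(dim=1).values

print(rec_err[:5], distinctiveness[:5])
```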

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 53
7973 Parameter Estimation of Additive Genetic and Unique Environment (AE) Model on Diabetes Mellitus Type 2 Using Bayesian Method

Authors: Andi Darmawan, Dewi Retno Sari Saputro, Purnami Widyaningsih

Abstract:

Diabetes mellitus (DM) is a chronic disease in humans that occurs when the pancreas cannot produce enough insulin or the body uses insulin ineffectively, which causes an increased level of glucose in the blood, called hyperglycemia. In Indonesia, DM is a serious health problem because it can cause blindness, kidney disease, diabetic foot (gangrene), and stroke. DM can also be classified according to its main causes into type 1, type 2, and gestational diabetes. Diabetes type 1, previously known as insulin-dependent diabetes, is due to a lack of insulin production. Diabetes type 2, previously known as non-insulin-dependent diabetes, is due to ineffective use of insulin, while gestational diabetes is hyperglycemia first found during pregnancy. The type most commonly found in patients is DM type 2. The main factors in this disease are genetics (A) and life style (E). A disease with these two factors can be described by the additive genetic and unique environment (AE) model. This article discusses parameter estimation of the AE model using the Bayesian method and the simulation of inheritance from parent to offspring. In the AE model, there are a response variable, predictor variables, and parameters that characterize the population under study. The population can be measured through a random sample: the response and predictor variables are determined from the sample, while the parameters are unknown and must be estimated based on the sample. Estimation of the AE model parameters was obtained from a joint posterior distribution. The simulation was conducted to obtain the genetic variance and the life style variance. The results of the simulation are 0.3600 for the genetic variance and 0.0899 for the life style variance. Therefore, the variance of the genetic factor in DM type 2 is greater than that of life style.

Keywords: AE model, Bayesian method, diabetes mellitus type 2, genetic, life style

Procedia PDF Downloads 250
7972 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy's attack is a very important step. In the vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would cause damage to internal components or the crew. The penetration equations are derived from penetration experiments, which require a long time and great effort; moreover, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating the penetration depth. However, it is very important to model the targets and select the input parameters properly in order to get an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved when adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary condition, material properties, and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data taken from papers on the penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to obtain optimized overall performance. As a result of the analysis, the following was found: 1) as the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase; 2) as the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease; 3) as the yield stress, one of the material properties of the target, decreases, the penetration depth increases; 4) the boundary condition with the fixed side surface of the target gives a larger penetration depth than that with the fixed side and rear surfaces. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiment. With ANSYS and carefully tuned input parameters, penetration analysis can be done on a computer without actual experiments. The data from penetration experiments are usually hard to obtain for security reasons, and published papers provide them only for a limited set of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. This result may not be accurate enough to replace the penetration experiments, but such simulations can be used in the modelling and simulation stage, early in the AGCV design process.
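
A small sketch of the tuning criterion described above: for each candidate mesh size, compute the RMS error between simulated and experimental penetration depths and keep the best setting; all depth values below are placeholders, not results from ANSYS or the cited experiments:

```python
import numpy as np

experimental = np.array([52.0, 61.0, 70.0])                 # penetration depths (mm) from published tests (placeholder)
simulated = {                                               # simulated depths for each candidate mesh size (mm, placeholder)
    0.9: np.array([48.0, 57.5, 66.0]),
    0.7: np.array([50.5, 59.5, 68.5]),
    0.5: np.array([51.5, 60.5, 69.5]),
}

rms = {mesh: float(np.sqrt(np.mean((sim - experimental) ** 2)))
       for mesh, sim in simulated.items()}
best_mesh = min(rms, key=rms.get)
print(rms, "-> best mesh size:", best_mesh, "mm")
```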

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 367
7971 The Evolution of Spatio-Temporal Patterns of New-Type Urbanization in the Central Plains Economic Region in China

Authors: Sun Fang, Zhang Wenxin

Abstract:

This paper establishes an evaluation index system for the spatio-temporal patterns of urbanization, with the county as the research unit. We use the Entropy Weight method, the coefficient of variation, the Theil index, and ESDA-GIS to analyze the spatial patterns and evolutionary characteristics of New-Type Urbanization in the Central Plains Economic Region (CPER) between 2000 and 2011. Results show that economic benefit, non-agricultural employment level, and level of market development are the most important factors influencing the level of New-Type Urbanization in the CPER; overall regional differences in New-Type Urbanization declined while spatial correlations increased from 2000 to 2011. The overall spatial pattern changed little, however; differences between the western and eastern areas of the CPER are clear, and the pattern of a strong west and weak east did not change significantly over the study period. Areas with high levels of New-Type Urbanization were mostly distributed on both sides of the Beijing-Guangzhou and Longhai Railways; newly urbanizing areas were tightly clustered around Zhengzhou in the Central Henan Urban Agglomeration, but this trend was found to be weakening slightly. The level of New-Type Urbanization in municipal districts was found to be much higher than in counties generally. Provincial border areas experienced a lower rate of growth and a lower level of New-Type Urbanization than any other areas, consistently forming clusters of cold spots and sub-cold spots. The analysis confirms that historical development, location, and the diffusion effects of urban agglomeration are the main drivers of changes in New-Type Urbanization patterns in the CPER.
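
A minimal sketch of one of the inequality measures used above, the Theil index, computed for a hypothetical vector of county-level urbanization scores in two years; a declining value indicates smaller regional differences:

```python
import numpy as np

def theil_index(x):
    """Theil T index of inequality for a vector of positive values."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    return np.mean((x / mu) * np.log(x / mu))

scores_2000 = np.array([0.21, 0.35, 0.18, 0.52, 0.28, 0.44])   # hypothetical county scores
scores_2011 = np.array([0.33, 0.41, 0.30, 0.58, 0.37, 0.49])
print(theil_index(scores_2000), theil_index(scores_2011))
```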

Keywords: new-type urbanization, spatial pattern, central plains economic region, spatial evolution

Procedia PDF Downloads 269
7970 Numerical Study of Heat Release of the Symmetrically Arranged Extruded-Type Heat Sinks

Authors: Man Young Kim, Gyo Woo Lee

Abstract:

In this numerical study, we present the design of a highly efficient extruded-type heat sink. Symmetrically arranged extruded-type heat sinks are used instead of a single extruded or swaged-type heat sink. In this parametric study, the maximum temperatures, the base temperatures between heaters, and the heat release rates were investigated with respect to the arrangement of heat sources, air flow rates, and amounts of heat input. Based on the results, we believe that using both sides of the heat sink releases heat much better than using a single side. The results also suggest that a symmetric arrangement of heat sources is recommended to achieve higher heat transfer from the heat sink.

Keywords: heat sink, forced convection, heat transfer, performance evaluation, symmetrical arrangement

Procedia PDF Downloads 379
7969 Financial Inclusion for Inclusive Growth in an Emerging Economy

Authors: Godwin Chigozie Okpara, William Chimee Nwaoha

Abstract:

The paper sets out to show how a financial inclusion index can be calculated and also investigates the impact of inclusive finance on inclusive growth in an emerging economy. In light of these objectives, the chi-wins method was used to calculate indices of financial inclusion, while cointegration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant function of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers call for tireless efforts by government and the banking sector in promoting financial inclusion in developing countries.

Keywords: chi-wins index, co-integration, error correction model, financial inclusion

Procedia PDF Downloads 629
7968 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, such as the underestimate of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
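
A minimal sketch of the aggregation effect on synthetic data: the maximum depth for duration d obtained from a sliding window on fine-resolution data versus the maximum over fixed, non-overlapping windows of width ta = d, the worst case discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)
rain_5min = rng.gamma(shape=0.05, scale=2.0, size=365 * 24 * 12)   # synthetic 5-minute rainfall depths (mm)

d_steps = 12                                   # duration d = 1 h = 12 five-minute steps
sliding = np.convolve(rain_5min, np.ones(d_steps), mode="valid")
Hd_true = sliding.max()                        # Hd from continuous (sliding-window) data

blocks = rain_5min[: len(rain_5min) // d_steps * d_steps].reshape(-1, d_steps)
Hd_agg = blocks.sum(axis=1).max()              # Hd from data aggregated at ta = d

print(f"true Hd = {Hd_true:.2f} mm, aggregated Hd = {Hd_agg:.2f} mm, "
      f"underestimate = {100 * (Hd_true - Hd_agg) / Hd_true:.1f}%")
```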

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 167