Search results for: type i error
8167 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models
Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh
Abstract:
In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE) by defining an error function in the mean square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models corrupted by noise makes it a challenging optimization task, which is dealt with through the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), the unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices over a sufficiently large number of runs in terms of estimation error, mean squared error and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual parameters of the system, with Genetic Algorithm based outcomes and with various deterministic approaches at different signal-to-noise ratio (SNR) levels.
Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals
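As a rough illustration of the estimation scheme described in this abstract, the following sketch minimizes a mean-square error function over sinusoidal parameters with differential evolution. SciPy's standard differential_evolution stands in for the authors' CDE variant; the single-tone model, bounds, and noise level are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of DE-based sinusoidal parameter estimation, assuming a
# single-tone model y = A*sin(2*pi*f*t + phi) + noise.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
true_A, true_f, true_phi = 1.5, 7.0, 0.6
y_noisy = true_A * np.sin(2 * np.pi * true_f * t + true_phi) \
          + 0.1 * rng.standard_normal(t.size)   # additive noise at some SNR

def mse(params):
    # error function in the mean square sense between model and observations
    A, f, phi = params
    return np.mean((y_noisy - A * np.sin(2 * np.pi * f * t + phi)) ** 2)

result = differential_evolution(mse, bounds=[(0, 5), (1, 20), (-np.pi, np.pi)], seed=1)
print(result.x, result.fun)  # estimated (A, f, phi) and residual MSE
```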
Procedia PDF Downloads 301
8166 General Principles of Accident Prevention in Built Environment Rehabilitation
Authors: Alfredo Soeiro
Abstract:
Rehabilitation of the built environment is a particular type of construction operation with respect to accident prevention; in fact, it is also a distinct type of task within construction itself. Due to the complex characteristics of rehabilitation tasks and the intrinsic difficulty of preventing accidents in construction, implementing adequate safety levels in this type of safety management is a major challenge. This paper addresses a set of proposed generic measures to face the unknown characteristics of the built environment in terms of stability, materials and the actual performance of buildings or other constructions. The necessary adaptation of preventive guidelines to this type of delicate refurbishment and renovation of existing facilities is also addressed. Training, observation and reflective approaches are necessary to perform safety management in the rehabilitation of the built environment.
Keywords: built environment, rehabilitation, construction safety, accident prevention, safety plan
Procedia PDF Downloads 215
8165 Cellular Traffic Prediction through Multi-Layer Hybrid Network
Authors: Supriya H. S., Chandrakala B. M.
Abstract:
Deep learning based models have recently been adopted successfully for network traffic prediction. However, training a deep learning model for various prediction tasks is considered one of the most critical tasks, for several reasons. This research work develops a Multi-Layer Hybrid Network (MLHN) for network traffic prediction and analysis; MLHN comprises three distinctive networks for handling the different inputs for custom feature extraction. Furthermore, an optimized and efficient parameter-tuning algorithm is introduced to enhance parameter learning. MLHN is evaluated on the "Big Data Challenge" dataset using Mean Absolute Error, Root Mean Square Error and R² as metrics; furthermore, the efficiency of MLHN is demonstrated through comparison with a state-of-the-art approach.
Keywords: MLHN, network traffic prediction
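For reference, the three evaluation metrics named in this abstract can be computed as in the sketch below; y_true and y_pred are placeholder arrays, not the "Big Data Challenge" data used in the paper.

```python
# A minimal sketch of the reported metrics (MAE, RMSE, R-squared).
import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([12.0, 15.5, 9.8, 20.1])   # placeholder traffic volumes
y_pred = np.array([11.4, 16.0, 10.5, 19.2])  # placeholder model output
print(mae(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
```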
Procedia PDF Downloads 87
8164 Characterizing the Geometry of Envy Human Behaviour Using Game Theory Model with Two Types of Homogeneous Players
Authors: A. S. Mousa, R. I. Rajab, A. A. Pinto
Abstract:
An envy behavioral game-theoretical model with two types of homogeneous players is considered in this paper. The strategy space of each type of player is a discrete set with only two alternatives. The preferences of each type of player are given by a discrete utility function. All envy strategies that form Nash equilibria and the corresponding envy Nash domains for each type of player have been characterized. We use geometry to construct two-dimensional envy tilings where the horizontal axis reflects the preference of players of type one, while the vertical axis reflects the preference of players of type two. The influence of the envy behavior parameters on the Cartesian position of the equilibria has been studied, and in each envy tiling we determine the envy Nash equilibria. We observe that there are 1024 combinatorial classes of envy tilings generated from envy chromosomes: 256 of them are structurally stable while 768 are with bifurcation. Finally, some conditions for the disparate envy Nash equilibria are stated.
Keywords: game theory, Nash equilibrium, envy Nash behavior, geometric tilings, bifurcation thresholds
Procedia PDF Downloads 226
8163 Evaluation of Cast-in-Situ Pile Condition Using Pile Integrity Test
Authors: Mohammad I. Hossain, Omar F. Hamim
Abstract:
This paper presents a case study on a pile integrity test for assessing the integrity of piles as well as the physical dimensions (e.g., cross-sectional area, length), continuity, and consistency of the pile materials. The recent boom in the socio-economic condition of Bangladesh has given rise to the building of high-rise commercial and residential infrastructures. The advantage of the pile integrity test lies in the fact that it is possible to get an approximate indication of the quality of the sub-structure before commencing the construction of the super-structure. This paper aims at providing a classification of cast-in-situ piles based on characteristic reflectograms obtained using the Sonic Integrity Testing program for the sub-soil condition of Narayanganj, Bangladesh. The piles have been classified as 'Pile Type-1', 'Pile Type-2', 'Pile Type-3', 'Pile Type-4', 'Pile Type-5' or 'Pile Type-6' from visual observations of the reflections of the stress waves generated by striking the pile head with a handheld hammer. With respect to construction quality and integrity, the piles have been further classified into three distinct categories: satisfactory, may be satisfactory, and unsatisfactory.
Keywords: cast-in-situ piles, characteristic reflectograms, pile integrity test, sonic integrity testing program
Procedia PDF Downloads 113
8162 Profitability Assessment of Granite Aggregate Production and the Development of a Profit Assessment Model
Authors: Melodi Mbuyi Mata, Blessing Olamide Taiwo, Afolabi Ayodele David
Abstract:
The purpose of this research is to create empirical models for assessing the profitability of granite aggregate production in Akure, Ondo State aggregate quarries. In addition, an artificial neural network (ANN) model and multivariate prediction models for granite profitability were developed in the study. A formal survey questionnaire was used to collect data for the study. The data extracted from the case study mine include granite marketing operations, royalty, production costs, and mine production information. The following tools were used to achieve the goal of this study: descriptive statistics, MATLAB 2017, and SPSS 16.0 software for analyzing and modeling the data collected from granite traders in the study areas. The prediction accuracy of the ANN and multivariate regression models was compared using the coefficient of determination (R²), root mean square error (RMSE), and mean square error (MSE). Owing to the high prediction error of the regression model, the model evaluation indices revealed that the ANN model was more suitable for predicting generated profit in a typical quarry. More quarries in Nigeria's southwest region and other geopolitical zones should be considered to improve ANN prediction accuracy.
Keywords: national development, granite, profitability assessment, ANN models
Procedia PDF Downloads 97
8161 Improved Performance Scheme for Joint Transmission in Downlink Coordinated Multi-Point Transmission
Authors: Young-Su Ryu, Su-Hyun Jung, Myoung-Jin Kim, Hyoung-Kyu Song
Abstract:
In this paper, an improved performance scheme for joint transmission is proposed for the downlink (DL) coordinated multi-point (CoMP) system under a transmission power constraint. In this scheme, the serving transmission point (TP) requests joint transmission from an inter-TP and selects one precoding technique according to the channel state information (CSI) from the user equipment (UE). The simulation results show that the bit error rate (BER) and throughput performances of the proposed scheme provide high spectral efficiency and reliable data at the cell edge.
Keywords: CoMP, joint transmission, minimum mean square error, zero-forcing, zero-forcing dirty paper coding
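The two linear precoders named in the keywords can be sketched as follows for a narrowband MIMO downlink; the channel dimensions, noise variance, and power normalization are illustrative assumptions, and the paper's selection rule and dirty-paper variant are not reproduced.

```python
# A minimal sketch of zero-forcing vs. MMSE precoding for y = H x + n,
# with H the (users x antennas) downlink channel known from CSI.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_tx = 2, 4
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)
noise_var = 0.1

# Zero-forcing: channel inversion, removing inter-user interference entirely.
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# MMSE (regularized ZF): trades residual interference against noise enhancement.
W_mmse = H.conj().T @ np.linalg.inv(H @ H.conj().T + noise_var * np.eye(n_users))

for name, W in [("ZF", W_zf), ("MMSE", W_mmse)]:
    W = W / np.linalg.norm(W)            # unit total transmit power (assumed constraint)
    print(name, np.abs(H @ W).round(3))  # effective channel after precoding
```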
Procedia PDF Downloads 550
8160 Suggestion of Reasonable Analysis Model for T-Girder Modular Bridge
Authors: Soonwon Kang, Jinwoong Choi, Sungnam Hong, Seung-Kyung Kye, Sun-Kyu Park
Abstract:
The modular bridge is constructed by assembling standardized precast segments and is classified into slab-type and T-girder-type bridges. The T-girder bridge has a transverse joint; however, verification of the transverse joint has not been performed for this type, whereas analytical studies on the joint have been carried out for the slab type. Therefore, it is necessary to propose an appropriate model for the precast modular T-girder bridge with a transverse joint. In this study, specimens and analysis models compared an integrated type with a segmented type. For the integrated and segmented specimens, the deflections were 98.40 mm and 74.66 mm at maximum loads of 269.71 kN and 248.29 kN, respectively; in the models of the specimens, the deflections were 84.04 mm and 69.39 mm at maximum loads of 269.71 kN and 248.29 kN. Therefore, the proposed analytical model for the precast T-girder modular bridge is appropriate.
Keywords: precast, T-girder modular bridge, finite element analysis, joint
Procedia PDF Downloads 414
8159 Role of ABC-Type Efflux Transporters in Antifungal Resistance of Candida auris
Authors: Mohamed Mahdi Alshahni, Takashi Tamura, Koichi Makimura
Abstract:
Objective: The objective of this study is to evaluate the roles of ABC-type efflux transporters in the resistance of Candida auris against common antifungal agents. Material and Methods: A wild-type C. auris strain and its antifungal-resistant derivative strain, generated through induction by antifungal agents, were used in this study. The strains were cultured on media containing beauvericin alone or in combination with azole agents. Moreover, the expression levels of four ABC-type transporter homologs in those strains were analyzed by real-time PCR with or without antifungal stress by fluconazole or voriconazole. Results: The addition of beauvericin helped to partially restore the susceptibility of the resistant strain to fluconazole, suggesting participation of ABC-type transporters in the resistance mechanism. Real-time PCR results showed that the mRNA levels of three of the four analyzed transporters in the resistant strain were more than 2-fold higher than their counterparts in the wild-type strain under both negative-control and antifungal agent-containing conditions. Conclusion: C. auris is an emerging multidrug-resistant pathogen causing human mortality worldwide. Providing effective treatment has been hampered by resistance to antifungal drugs, demanding an understanding of the resistance mechanism in order to devise new therapeutic strategies. Our data suggest a partial contribution of ABC-type transporters to the resistance of this pathogen.
Keywords: resistance, C. auris, transporters, antifungi
Procedia PDF Downloads 167
8158 Feasibility Study on Hybrid Multi-Stage Direct-Drive Generator for Large-Scale Wind Turbine
Authors: Jin Uk Han, Hye Won Han, Hyo Lim Kang, Tae An Kim, Seung Ho Han
Abstract:
Direct-drive generators for large-scale wind turbines, which are divided into AFPM (Axial Flux Permanent Magnet) and RFPM (Radial Flux Permanent Magnet) type machines, have attracted interest because of their higher energy density in comparison with gear-train-type generators. Each type of machine provides distinguishable geometrical features, such as a narrow width with a large diameter for the AFPM-type machine and a wide width with a certain diameter for the RFPM-type machine. When the AFPM-type machine is applied, an increase in electric power production through a multi-stage arrangement in the axial direction is easily achieved. On the other hand, the RFPM-type machine can be applied by using its geometric feature of wide width. In this study, a hybrid two-stage direct-drive generator for a 6.2 MW class wind turbine was proposed, in which the two-stage AFPM-type machine for 5 MW was composed of two models arranged in the axial direction with a hollow-shape topology of the rotor with an annular disc, the stator and the main shaft mounted on coupled slew bearings. In addition, the RFPM-type machine for 1.2 MW was installed in the empty space of the rotor. Analytic results obtained from an electro-magnetic and structural interaction analysis showed that the structural weight of the proposed hybrid two-stage direct-drive generator can be kept to 155 tonf while satisfying the requirements on structural behaviors such as allowable air-gap clearance and strength. Therefore, the 6.2 MW hybrid two-stage direct-drive generator is more competitive than conventional generators. (NRF grant funded by the Korea government MEST, No. 2017R1A2B4005405).
Keywords: AFPM-type machine, direct-drive generator, electro-magnetic analysis, large-scale wind turbine, RFPM-type machine
Procedia PDF Downloads 166
8157 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique
Authors: Sahar Tabarroki, Ahad Nazari
Abstract:
The design process is one of the key project processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early during the design phase, and they become expensive in either the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors, so identification of risks is necessary. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questions in the questionnaire were based on the "similar service description of study and supervision of architectural works" published by the "Vice Presidency of Strategic Planning & Supervision of I.R. Iran" as the basis of architects' tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled by the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist in order to identify concrete examples of architectural design errors. The results revealed that activities such as "defining the current and future requirements of the project", "studies and space planning," and "time and cost estimation of the suggested solution" have a higher error risk than others. Moreover, the most important causes include "unclear goals of a client", "time pressure from a client", and "lack of knowledge of architects about the requirements of end-users". For error detection in the case study, the lack of criteria, standards and design guidelines, and the lack of coordination among them, was a barrier; nevertheless, "lack of coordination between architectural design and electrical and mechanical facilities", "violation of the standard dimensions and sizes in space design", and "design omissions" were identified as the most important design errors.
Keywords: architectural design, design error, risk management, risk factor
Procedia PDF Downloads 129
8156 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation
Authors: Hangsik Shin
Abstract:
The purpose of this research is to restore the feature locations of under-sampled photoplethysmograms using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz, then compared the feature locations with those of the 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. Results showed that the time differences were dramatically decreased by interpolation, with a location error of less than 1 ms for both feature types. In the 10 Hz-sampled cases, the location error was also decreased considerably; however, it remained over 10 ms.
Keywords: peak detection, photoplethysmography, sampling, signal reconstruction
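The core idea of the study can be sketched as below: decimate a signal, re-interpolate it with a cubic spline, and compare peak locations against the reference. The synthetic pulse-like waveform and the rates (1 kHz reference, 25 Hz under-sampling) are illustrative assumptions, not the authors' 10 kHz recordings.

```python
# A minimal sketch of peak-location restoration by spline interpolation.
import numpy as np
from scipy.interpolate import CubicSpline

fs_ref, fs_low = 1000, 25
t_ref = np.arange(0, 1, 1 / fs_ref)
signal = np.sin(2 * np.pi * 1.2 * t_ref) + 0.4 * np.sin(2 * np.pi * 2.4 * t_ref + 0.8)

step = fs_ref // fs_low
t_low = t_ref[::step]                        # decimation by sample picking
spline = CubicSpline(t_low, signal[::step])
restored = spline(t_ref)                     # back to the reference rate

peak_ref = t_ref[np.argmax(signal)]
peak_low = t_low[np.argmax(signal[::step])]
peak_restored = t_ref[np.argmax(restored)]
print(f"peak error without interpolation: {abs(peak_low - peak_ref) * 1e3:.1f} ms")
print(f"peak error after interpolation:   {abs(peak_restored - peak_ref) * 1e3:.1f} ms")
```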
Procedia PDF Downloads 365
8155 Energetic and Exergetic Evaluation of Box-Type Solar Cookers Using Different Insulation Materials
Authors: A. K. Areamu, J. C. Igbeka
Abstract:
The performance of box-type solar cookers has been reported by several researchers, but little attention has been paid to the effect of the type of insulation material on the energy and exergy efficiency of these cookers. This research aimed at evaluating the energy and exergy efficiencies of box-type cookers containing different insulation materials. The energy and exergy efficiencies of five box-type solar cookers insulated with maize cob, air (control), maize husk, coconut coir and polyurethane foam, respectively, were obtained over a period of three years. The cookers were evaluated using water heating test procedures for the energy and exergy analysis. The results were subjected to statistical analysis using ANOVA. The results show that the average energy inputs for the five solar cookers were 245.5, 252.2, 248.7, 241.5 and 245.5 J respectively, while their respective average energy losses were 201.2, 212.7, 208.4, 189.1 and 199.8 J. The average exergy inputs for the five cookers were 228.2, 234.4, 231.1, 224.4 and 228.2 J respectively, while their respective average exergy losses were 223.4, 230.6, 226.9, 218.9 and 223.0 J. The energy and exergy efficiencies were highest for the cooker with coconut coir (37.35% and 3.90% respectively) in the first year but lowest for air (11% and 1.07% respectively) in the third year. Statistical analysis showed a significant difference between the energy and exergy efficiencies over the years. These results reiterate the importance of a good insulating material for a box-type solar cooker.
Keywords: efficiency, energy, exergy, heating insolation
Procedia PDF Downloads 366
8154 Maximum Initial Input Allowed to Iterative Learning Control Set-up Using Singular Values
Authors: Naser Alajmi, Ali Alobaidly, Mubarak Alhajri, Salem Salamah, Muhammad Alsubaie
Abstract:
Iterative Learning Control (ILC) is known as a control tool to overcome periodic disturbances in repetitive systems. This technique is required to let the error signal tend to zero as the number of operations increases. The learning process that lies within this context is strongly dependent on the initial input, which, if selected properly, tends to make the learning process more effective compared to the case where the system starts blind. ILC uses previously recorded execution data to update the following execution/trial input such that a reference trajectory is followed to a high accuracy. Error convergence in ILC is generally highly dependent on the input applied to the plant for trial 1; thus a good choice of the initial input signal makes learning faster and, as a consequence, lets the error tend to zero faster as well. In the work presented here, an upper limit based on the Singular Values Principle (SV) is derived for the initial input signal applied at trial 1, such that the system follows the reference in fewer trials without responding aggressively or exceeding the working envelope within which a system, for example a robot arm, is required to move. Simulation results presented illustrate the theory introduced within this paper.
Keywords: initial input, iterative learning control, maximum input, singular values
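A lifted-system sketch of the idea follows: since the output norm is bounded by the largest singular value of the lifted plant, capping the trial-1 input norm at y_limit/σ_max keeps the first response inside the working envelope. The toy plant, gradient-type learning gain, and this particular cap are illustrative assumptions; the paper's actual SV-based derivation may differ.

```python
# A minimal sketch of lifted-system ILC with an SVD-motivated cap on the
# trial-1 input, for a discrete SISO plant y = G u over one trial.
import numpy as np

N = 50
h = 0.5 ** np.arange(N)   # toy impulse response h[k] = 0.5**k
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])

r = np.sin(np.linspace(0, np.pi, N))   # reference trajectory
y_limit = 1.2                           # working envelope on the output norm

# conservative trial-1 input: ||y1|| <= sigma_max(G) * ||u1||, so cap ||u1||
sigma_max = np.linalg.norm(G, 2)
u = (r / np.linalg.norm(r)) * min(np.linalg.norm(r), y_limit / sigma_max)

L = 0.3 * G.T                           # gradient-type learning gain
for trial in range(30):
    e = r - G @ u                       # trial error
    u = u + L @ e                       # ILC update
print("final error norm:", np.linalg.norm(r - G @ u))
```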
Procedia PDF Downloads 240
8153 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z
Authors: Catarina Cruz, Ana Breda
Abstract:
Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question 'for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?'. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, the most difficult cases being considered those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption of the existence of such a code M. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings
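The counting objects the argument manipulates can be computed with the standard closed-form count of lattice points in a Lee ball; the sketch below prints the size of the radius-2 Lee sphere in Z⁷ and the number of words at exact distance 3 from the origin. The interpretation comments are assumptions tying the numbers back to the proof outline above.

```python
# A minimal sketch of Lee-ball and Lee-shell cardinalities in Z^n.
from math import comb

def lee_ball_size(n, r):
    # number of x in Z^n with |x_1| + ... + |x_n| <= r
    return sum(2**k * comb(n, k) * comb(r, k) for k in range(min(n, r) + 1))

def lee_shell_size(n, d):
    # number of words at Lee distance exactly d from a fixed word
    return lee_ball_size(n, d) - (lee_ball_size(n, d - 1) if d > 0 else 0)

print(lee_ball_size(7, 2))   # 113: words covered by one codeword of PL(7, 2)
print(lee_shell_size(7, 3))  # 462: words at distance 3 from O, each needing a unique coverer
```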
Procedia PDF Downloads 159
8152 Assessment of Time-variant Work Stress for Human Error Prevention
Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee
Abstract:
For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The possibility of human error may be low, but its risk would be unimaginably enormous. Thus, for accident prevention, it is quite indispensable to analyze the influence of any factors which may raise the possibility of human errors. During the past decades, more than a few research results have shown that the performance of human operators may vary over time due to many factors. Among them, stress is known to be an indirect factor that may cause human errors and result in mental illness. Until now, quite a few assessment tools have been developed to assess the stress level of human workers. However, it is still questionable to utilize them for human performance anticipation, which is related to human error possibility, because they were mainly developed from the viewpoint of mental health rather than industrial safety. The stress level of a person may go up or down with work time. In that sense, if they are to be applicable in the safety aspect, they should at least be able to assess the variation resulting from work time. Therefore, this study aimed to compare their applicability for safety purposes. More than 10 kinds of work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools mainly put their weight on some common organizational factors such as demands, supports, and relationships, in sequence, and their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up overall counterplans in a PDCA cycle or risk management activities, which would be far from practical human error prevention. Thus, it was concluded that the application of stress assessment tools mainly developed for mental health seemed impractical for safety purposes with respect to human performance anticipation, and that the development of a new assessment tool would be inevitable if one wants to assess stress level in terms of human performance variation and accident prevention. As a consequence, as a practical counterplan, this study proposed a new scheme for the assessment of the work stress level of a human operator that may vary over work time, which is closely related to the possibility of human errors.
Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention
Procedia PDF Downloads 670
8151 Banking Sector Development and Economic Growth: Evidence from the State of Qatar
Authors: Fekri Shawtari
Abstract:
The banking sector plays a very crucial role in the economic development of a country. As a financial intermediary, it is assigned a great role in economic growth and stability. This paper aims to examine empirically the relationship between the banking industry and economic growth in the State of Qatar. We adopt the vector error correction model (VECM) along with Granger causality to address the long-run and short-run relationships between the banking sector and economic growth. It is expected that the results will give policy directions to the policymakers to make strategies that are conducive to boosting development to achieve the targeted economic growth in the current situation.
Keywords: economic growth, banking sector, Qatar, vector error correction model, VECM
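A sketch of the VECM-plus-Granger-causality workflow follows, assuming a two-variable system (a banking-sector indicator and GDP) held in a DataFrame. The data here are synthetic placeholders, and the lag order and deterministic terms are illustrative assumptions; in practice they would be chosen with information criteria alongside the Johansen test.

```python
# A minimal sketch of a VECM fit and Granger-causality test with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(0)
common = np.cumsum(rng.standard_normal(200))   # shared stochastic trend
df = pd.DataFrame({
    "bank": common + rng.standard_normal(200),
    "gdp": 0.8 * common + rng.standard_normal(200),
})

rank = select_coint_rank(df, det_order=0, k_ar_diff=2)  # Johansen-based rank choice
res = VECM(df, k_ar_diff=2, coint_rank=rank.rank, deterministic="ci").fit()
print(res.beta)    # cointegrating (long-run) relationship
print(res.alpha)   # speed-of-adjustment coefficients
print(res.test_granger_causality(caused="gdp", causing="bank").summary())
```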
Procedia PDF Downloads 168
8150 Virtual Assessment of Measurement Error in the Fractional Flow Reserve
Authors: Keltoum Chahour, Mickael Binois
Abstract:
Due to a lack of standardization during the invasive fractional flow reserve (FFR) procedure, the index is subject to many sources of uncertainty. In this paper, we investigate, through simulation, the effect of the FFR device position and configuration on the obtained value of the FFR fraction. For this purpose, we use computational fluid dynamics (CFD) in a 3D domain corresponding to a diseased arterial portion. The FFR pressure sensor is introduced inside it with a given length and coefficient of bending to capture the FFR value. To get over the computational limitations (basically, the simulation time is about 2 h 15 min for one FFR value), we generate a Gaussian process (GP) model for FFR prediction. The GP model shows good accuracy and demonstrates the effective error in the measurement created by the random configuration of the pressure sensor.
Keywords: fractional flow reserve, Gaussian processes, computational fluid dynamics, drift
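The surrogate-modeling step can be sketched as below: a Gaussian process is trained on a small number of expensive CFD runs mapping a sensor configuration to an FFR value, and then predicts unseen configurations with uncertainty. The toy response function stands in for the 2 h 15 min CFD solve; the kernel choice and input ranges are illustrative assumptions.

```python
# A minimal sketch of a GP surrogate replacing expensive CFD evaluations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def cfd_ffr(x):
    # placeholder for the CFD simulation: FFR as a smooth function of
    # (normalized sensor position, bending coefficient)
    pos, bend = x[..., 0], x[..., 1]
    return 0.80 + 0.05 * np.sin(3 * pos) - 0.03 * bend**2

X_train = rng.uniform(0, 1, size=(20, 2))   # 20 affordable CFD runs
y_train = cfd_ffr(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_new = rng.uniform(0, 1, size=(5, 2))            # unseen configurations
mean, std = gp.predict(X_new, return_std=True)    # prediction + uncertainty
print(np.c_[mean, std].round(4))
```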
Procedia PDF Downloads 132
8149 Estimation of Population Mean under Random Non-Response in Two-Occasion Successive Sampling
Authors: M. Khalid, G. N. Singh
Abstract:
In this paper, we have considered the problem of estimating the population mean on the current (second) occasion in two-occasion successive sampling under random non-response situations. Some modified exponential-type estimators have been proposed, and their properties are studied under the assumption that the number of sampling units follows a discrete distribution due to random non-response. The performances of the proposed estimators are compared with linear combinations of two estimators: (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample under complete response. Results are demonstrated through empirical studies, which show the effectiveness of the proposed estimators. Suitable recommendations have been made to survey practitioners.
Keywords: modified exponential estimator, successive sampling, random non-response, auxiliary variable, bias, mean square error
Procedia PDF Downloads 349
8148 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others; it is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and that of its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores, suggesting that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
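The two image-level predictors used in the study can be sketched as follows: per-image reconstruction error and latent-space distinctiveness (Euclidean distance to the nearest neighbor), each correlated with memorability. Here random arrays stand in for the fine-tuned VGG-based autoencoder's outputs and the MemCat scores; only the metric computations are real.

```python
# A minimal sketch of reconstruction error and latent distinctiveness
# versus memorability, with stand-in data in place of the autoencoder.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, latent_dim = 200, 64
images = rng.standard_normal((n_images, 32 * 32))
recons = images + 0.1 * rng.standard_normal(images.shape)   # stand-in reconstructions
latents = rng.standard_normal((n_images, latent_dim))       # stand-in latent codes
memorability = rng.uniform(0.4, 0.95, n_images)             # stand-in MemCat scores

# per-image reconstruction error (MSE between original and reconstruction)
recon_error = np.mean((images - recons) ** 2, axis=1)

# distinctiveness: Euclidean distance to the nearest other latent code
d = cdist(latents, latents)
np.fill_diagonal(d, np.inf)
distinctiveness = d.min(axis=1)

print(spearmanr(recon_error, memorability))
print(spearmanr(distinctiveness, memorability))
```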
Procedia PDF Downloads 88
8147 Correlation between Microalbuminuria and Hypertension in Type 2 Diabetic Patients
Authors: Alia Ali, Azeem Taj, Muhammed Joher Amin, Farrukh Iqbal, Zafar Iqbal
Abstract:
Background: Hypertension is commonly found in patients with diabetic kidney disease (DKD). Microalbuminuria is the first clinical sign of kidney involvement in patients with type 2 diabetes. Uncontrolled hypertension induces a higher risk of cardiovascular events, including death, increasing proteinuria, and progression to kidney disease. Objectives: To determine the correlation between microalbuminuria and hypertension and their association with other risk factors in type 2 diabetic patients. Methods: One hundred and thirteen type 2 diabetic patients attending the diabetic clinic of Shaikh Zayed Hospital, Lahore, Pakistan, were screened for microalbuminuria and raised blood pressure. The study was conducted from November 2012 to June 2013. Results: Patients were divided into two groups: Group 1, those with normoalbuminuria (n=63), and Group 2, those having microalbuminuria (n=50). Group 2 patients showed higher blood pressure values as compared to Group 1. The results were statistically significant and showed poor glycemic control as a contributing risk factor. Conclusion: The study concluded that there is a high frequency of hypertension among type 2 diabetics, but it is still much higher among those having microalbuminuria. Thus, early recognition of renal dysfunction through detection of microalbuminuria, and starting treatment without delay, will confer future protection from end-stage renal disease as well as hypertension and its complications in type 2 diabetic patients.
Keywords: hypertension, microalbuminuria, diabetic kidney disease, type 2 diabetes mellitus
Procedia PDF Downloads 395
8146 Financial Inclusion for Inclusive Growth in an Emerging Economy
Authors: Godwin Chigozie Okpara, William Chimee Nwaoha
Abstract:
The paper sets out to show how a financial inclusion index can be calculated and also investigates the impact of inclusive finance on inclusive growth in an emerging economy. In the light of these objectives, the chi-wins method was used to calculate indices of financial inclusion, while co-integration and an error correction model were used to evaluate the impact of financial inclusion on inclusive growth. The result of the analysis revealed that financial inclusion, while having a long-run relationship with GDP growth, is an insignificant function of the growth of the economy. The speed of adjustment is correctly signed and significant. On the basis of these results, the researchers called for tireless efforts of government and the banking sector in promoting financial inclusion in developing countries.
Keywords: chi-wins index, co-integration, error correction model, financial inclusion
Procedia PDF Downloads 651
8145 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), a very important step is to analyze the vulnerability (or survivability) of the AGCV against an enemy's attack. In vulnerability analysis, penetration equations are usually used to get the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which causes damage to internal components or crews. The penetration equations are derived from penetration experiments, which require a long time and great effort. However, they usually hold only for the specific material of the target and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option to calculate penetration depth. However, it is very important to model the targets and select the input parameters in order to get an accurate penetration depth. This paper performed a sensitivity analysis of the input parameters of ANSYS on the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS in penetration analysis: maximizing the accuracy of calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters for ANSYS was performed, and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties, and target diameter, were tested and selected to minimize the error between the calculated results from simulation and the experimental data from papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to get optimized overall performance. As a result of the analysis, the following were found: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with the fixed side surface of the target gives more penetration depth than that with fixed side and rear surfaces. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiments. By using the simulation tool ANSYS with delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. The data from penetration experiments are usually hard to get because of security reasons, and only published papers provide them, for limited target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. This result may not be accurate enough to replace the penetration experiments, but such simulations can be used in the early stage of the design process of an AGCV, in the modelling and simulation stage.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
Procedia PDF Downloads 399
8144 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini
Abstract:
A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, like the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is producing effects on extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations of Central Italy, may overcome this issue; 5) these equations should allow improving the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
Keywords: central Italy, extreme events, rainfall data, underestimation errors
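The mechanism behind the underestimate can be demonstrated numerically: the true maximum d-duration depth is a rolling sum that may straddle two fixed ta-length recording intervals, while aggregated data only see block totals. The synthetic 5-minute hyetograph below is an illustrative assumption, not the Central Italy records used in the paper.

```python
# A minimal sketch of the Hd underestimate for the worst case d = ta.
import numpy as np

rng = np.random.default_rng(2)
step_min, d_min = 5, 60            # 5-min "continuous" data, duration d = 1 h = ta
rain = rng.exponential(0.2, size=24 * 60 // step_min)  # one day of 5-min depths (mm)

w = d_min // step_min
# true Hd: maximum over a window sliding at the fine resolution
true_hd = max(rain[i:i + w].sum() for i in range(len(rain) - w + 1))
# apparent Hd: maximum over fixed, non-overlapping ta-blocks (clock hours)
apparent_hd = rain.reshape(-1, w).sum(axis=1).max()

print(f"true Hd = {true_hd:.2f} mm, aggregated Hd = {apparent_hd:.2f} mm, "
      f"underestimate = {100 * (1 - apparent_hd / true_hd):.1f}%")
```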
Procedia PDF Downloads 189
8143 MCERTL: Mutation-Based Correction Engine for Register-Transfer Level Designs
Authors: Khaled Salah
Abstract:
In this paper, we present MCERTL (a mutation-based correction engine for RTL designs), an automatic error correction technique based on mutation analysis. A mutation-based correction methodology is proposed to automatically fix erroneous RTL designs. The proposed strategy combines the processes of mutation and assertion-based localization. The erroneous statements are mutated to produce possible fixes for the failing RTL code. A concurrent mutation engine is proposed to mitigate the computational cost of running mutation operators sequentially. The proposed methodology is evaluated against some benchmarks. The experimental results demonstrate that our proposed method enables us to automatically locate and correct multiple bugs in reasonable time.
Keywords: bug localization, error correction, mutation, mutants
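The mutate-and-validate loop at the core of such an engine can be sketched in miniature: toy Python expressions stand in for RTL assignments, a pass/fail check stands in for the assertion-based test bench, and simple operator swaps stand in for the mutation operators. All of these stand-ins are assumptions; the paper's actual operators, localization, and HDL front end are not reproduced.

```python
# A minimal sketch of a mutation-based repair loop over localized statements.
suspect_statements = ["out = a & b"]               # localized by failing assertions
operators = [("&", "|"), ("&", "^"), ("|", "&")]   # toy mutation operators

def testbench(stmt):
    # stand-in for assertion checking: the "specification" wants XOR behavior
    env = {"a": 1, "b": 1}
    exec(stmt, env)
    return env["out"] == (env["a"] ^ env["b"])

for stmt in suspect_statements:
    for old, new in operators:
        if old not in stmt:
            continue
        mutant = stmt.replace(old, new, 1)         # single-point mutation
        if testbench(mutant):
            print(f"candidate fix: '{stmt}' -> '{mutant}'")
```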
Procedia PDF Downloads 279
8142 An Application of Modified M-out-of-N Bootstrap Method to Heavy-Tailed Distributions
Authors: Hannah F. Opayinka, Adedayo A. Adepoju
Abstract:
This study is an extension of a prior study on the modification of the existing m-out-of-n (moon) bootstrap method for heavy-tailed distributions, in which the modified m-out-of-n (mmoon) bootstrap was proposed as an alternative to the existing moon technique. In this study, both the moon and mmoon techniques were applied to two real income datasets, which followed Lognormal and Pareto distributions respectively, with finite variances. The performances of the two techniques were compared using the Standard Error (SE) and Root Mean Square Error (RMSE). The findings showed that mmoon outperformed the moon bootstrap in terms of smaller SEs and RMSEs for all the sample sizes considered in the two datasets.
Keywords: bootstrap, income data, lognormal distribution, Pareto distribution
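For context, the plain m-out-of-n (moon) bootstrap that the paper modifies resamples m < n observations per replicate and rescales the spread back to the original sample size. The Pareto-like data, the choice m = n^0.7, and the number of replicates below are illustrative assumptions; the authors' mmoon modification is not shown.

```python
# A minimal sketch of the m-out-of-n bootstrap SE of the mean for
# heavy-tailed data.
import numpy as np

rng = np.random.default_rng(0)
x = (rng.pareto(2.5, size=500) + 1.0) * 10.0   # heavy-tailed income-like data
n = x.size
m = int(n ** 0.7)                              # subsample size: m -> inf, m/n -> 0

boot_means = np.array([
    rng.choice(x, size=m, replace=True).mean() for _ in range(2000)
])
# rescale the subsample spread to the original sample size before reporting SE
se_moon = np.sqrt(m / n) * boot_means.std(ddof=1)
print(f"moon bootstrap SE of the mean: {se_moon:.4f}")
```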
Procedia PDF Downloads 185
8141 The Effect of Iron Deficiency on the Magnetic Properties of Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ M-Type Hexaferrites
Authors: Kang-Hyuk Lee, Wei Yan, Sang-Im Yoo
Abstract:
Recently, Ca₁₋ₓLaₓFe₁₂O₁₉ (Ca-La M-type) hexaferrites have been reported to possess higher crystalline anisotropy than SrFe₁₂O₁₉ (Sr M-type) hexaferrite without a reduction in saturation magnetization (Ms), resulting in higher coercivity (Hc). While iron deficiency is known to be helpful for the growth and formation of NiZn spinel ferrites, the effect of iron deficiency in Ca-La M-type hexaferrites has never been reported. In this study, therefore, we investigated the effect of iron deficiency on the magnetic properties of Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ hexaferrites prepared by solid-state reaction. As-calcined powder was pressed into pellets and sintered at 1275-1325℃ for 4 h in air. Samples were characterized by powder X-ray diffraction (XRD), vibrating sample magnetometry (VSM), and scanning electron microscopy (SEM). Powder XRD analyses revealed that Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ (0.75 ≤ y ≤ 2.15) ferrites calcined at 1250-1300℃ for 12 h in air were composed of a single phase without second phases. With increasing iron deficiency, y, the lattice parameters a and c and the unit cell volumes first decreased, up to y=10.25, and then increased again. The highest Ms value of 77.5 emu/g was obtained from the sample of Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ sintered at 1300℃ for 4 h in air. Detailed microstructures and magnetic properties of Ca-La M-type hexagonal ferrites will be presented for discussion.
Keywords: Ca-La M-type hexaferrite, magnetic properties, iron deficiency, hexaferrite
Procedia PDF Downloads 458
8140 Block Implicit Adams Type Algorithms for Solution of First Order Differential Equation
Authors: Asabe Ahmad Tijani, Y. A. Yahaya
Abstract:
The paper considers the derivation of implicit Adams-Moulton-type methods with k=4 and 5. We adopted the method of interpolation and collocation of a power series approximation to generate the continuous formula, which was evaluated at off-grid and some grid points within the step length to generate the proposed block schemes. The schemes were investigated and found to be consistent and zero-stable. Finally, the methods were tested with numerical experiments to ascertain their level of accuracy.
Keywords: Adams-Moulton Type (AMT), off-grid, block method, consistent and zero stable
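The underlying k=4 Adams-Moulton corrector can be sketched as below, with the implicit stage solved by fixed-point iteration and starting values bootstrapped with classical RK4. The paper's block/off-grid construction is not reproduced; the test problem and step size are illustrative assumptions, and only the standard AM formula is shown.

```python
# A minimal sketch of the 4-step implicit Adams-Moulton method for y' = f(t, y).
import numpy as np

def f(t, y):
    return -2.0 * y + np.exp(-t)      # toy linear test problem, y(0) = 1

def rk4_step(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

h, n = 0.05, 100
t = np.arange(n + 1) * h
y = np.zeros(n + 1)
y[0] = 1.0
for i in range(3):                    # a 4-step method needs 4 starting values
    y[i + 1] = rk4_step(t[i], y[i], h)

for i in range(3, n):
    # 4-step Adams-Moulton (order 5):
    # y_{n+1} = y_n + h/720 (251 f_{n+1} + 646 f_n - 264 f_{n-1} + 106 f_{n-2} - 19 f_{n-3})
    explicit = (646 * f(t[i], y[i]) - 264 * f(t[i - 1], y[i - 1])
                + 106 * f(t[i - 2], y[i - 2]) - 19 * f(t[i - 3], y[i - 3]))
    y_next = y[i]                     # fixed-point iteration on the implicit term
    for _ in range(5):
        y_next = y[i] + h / 720 * (251 * f(t[i + 1], y_next) + explicit)
    y[i + 1] = y_next

exact = np.exp(-t)                    # exact solution of this test problem
print("max abs error:", np.abs(y - exact).max())
```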
Procedia PDF Downloads 480
8139 Selection of Rayleigh Damping Coefficients for Seismic Response Analysis of Soil Layers
Authors: Huai-Feng Wang, Meng-Lin Lou, Ru-Lin Zhang
Abstract:
One good analysis method for seismic response analysis is direct time integration, which widely adopts Rayleigh damping. An approach is presented for selecting Rayleigh damping coefficients to be used in seismic analyses to produce a response that is consistent with the modal damping response. In the presented approach, an expression for the error of the peak response, obtained through the complete quadratic combination method, is set up in terms of the Rayleigh damping coefficients, and the coefficients are then produced by minimizing the error. Two finite element models of soil layers, excited by 28 seismic waves, were used to demonstrate the feasibility and validity of the approach.
Keywords: Rayleigh damping, modal damping, damping coefficients, seismic response analysis
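For background, the classical two-frequency fit that such error-minimization approaches refine is shown below: with C = αM + βK, the modal damping ratio is ζ(ω) = α/(2ω) + βω/2, so matching target ratios at two control frequencies fixes (α, β). The control frequencies and the 5% target ratio are illustrative assumptions, not the paper's optimized values.

```python
# A minimal sketch of the classical two-frequency Rayleigh damping fit.
import numpy as np

w_i, w_j = 2 * np.pi * 1.0, 2 * np.pi * 5.0   # control frequencies (rad/s)
zeta_i = zeta_j = 0.05                         # target modal damping ratios

A = np.array([[1 / (2 * w_i), w_i / 2],
              [1 / (2 * w_j), w_j / 2]])
alpha, beta = np.linalg.solve(A, [zeta_i, zeta_j])
print(f"alpha = {alpha:.5f}, beta = {beta:.6f}")

# resulting ratio at any other modal frequency (it dips between w_i and w_j)
w = 2 * np.pi * 3.0
print("zeta(3 Hz) =", alpha / (2 * w) + beta * w / 2)
```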
Procedia PDF Downloads 436
8138 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria
Authors: Isaac Kayode Ogunlade
Abstract:
Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller communicating with a personal computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, and used artificial intelligence (an Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were subjected to 180 days under the same atmospheric conditions for data mining (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), coefficient of determination (R²), and Mean Percentage Error (MPE) were deployed as standard metrics to evaluate the performance of the models in the prediction of precipitation. The results from the operation of the developed device show that it has an efficiency of 96% and is also compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) revealed a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device
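The statistical branch of the comparison can be sketched as below: fitting an ARIMA model to a daily weather series and forecasting ahead. The synthetic series and the (1, 1, 1) order are illustrative assumptions; order selection would normally use AIC/BIC, and the ANN branch (trained in MATLAB in the paper) is not shown.

```python
# A minimal sketch of an ARIMA fit-and-forecast step with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
days = pd.date_range("2017-01-01", periods=180, freq="D")   # 180-day campaign
series = pd.Series(
    20 + 5 * np.sin(2 * np.pi * np.arange(180) / 30) + rng.standard_normal(180),
    index=days,
)

model = ARIMA(series, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=61)    # roughly a May-June horizon
print(forecast.head())
```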
Procedia PDF Downloads 88