Search results for: participatory error correction process
16735 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models
Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh
Abstract:
In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE) by defining an error function in the mean-square sense. The multidimensional and nonlinear nature of the problem emerging in sinusoidal signal models, along with noise, makes it a challenging optimization task, which is dealt with through the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices on a sufficiently large number of runs in terms of estimation error, mean squared error and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual parameters of the system, Genetic Algorithm based outcomes and various deterministic approaches at different signal-to-noise ratio (SNR) levels.
Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals
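As a rough, self-contained sketch of the mean-square error function that a scheme like CDESPE minimizes, the snippet below uses SciPy's stock differential evolution as a stand-in for CDE; the sinusoid parameters, bounds, and noise level are invented for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic noisy sinusoid: the "true" parameters are assumed for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
true_amp, true_freq, true_phase = 2.0, 5.0, 0.3
y = true_amp * np.sin(2 * np.pi * true_freq * t + true_phase)
y_noisy = y + rng.normal(0, 0.2, t.size)  # additive noise sets the SNR

def mse_error(params):
    """Error function in the mean-square sense, as in the CDESPE scheme."""
    amp, freq, phase = params
    model = amp * np.sin(2 * np.pi * freq * t + phase)
    return np.mean((y_noisy - model) ** 2)

# SciPy's differential evolution stands in for CDE here.
result = differential_evolution(mse_error,
                                bounds=[(0, 5), (0, 10), (-np.pi, np.pi)],
                                seed=0)
print("estimated (amp, freq, phase):", result.x)
```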
Procedia PDF Downloads 302
16734 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is the main factor that affects the economics and efficiency of drilling operations. Knowing the pore pressure and the parameters that affect it helps to reduce the cost of the drilling process. Many empirical models reported in the literature have been used to calculate the formation pressure based on different parameters. Some of these models use only drilling parameters to estimate pore pressure; others predict the formation pressure based on log data. All of these models require an assumed trend, normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then only by one method or at most two. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely: weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without the need for an assumed trend, unlike other models, which require distinguishing between normal and abnormal pressure trends. Moreover, comparing the AI tools with each other indicates that SVM has the advantage in pore pressure prediction due to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%). Finally, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
Keywords: Artificial Intelligence (AI), formation pressure, Artificial Neural Networks (ANN), Fuzzy Logic (FL), Support Vector Machine (SVM), Functional Networks (FN), Radial Basis Function (RBF)
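A minimal sketch of one of the AI methods named above (SVM regression) with the two evaluation metrics the abstract reports, correlation coefficient and average absolute percentage error, is shown below; the feature matrix and the pore-pressure relation are synthetic stand-ins, not field data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical arrays: rows are depth samples, columns are the seven inputs
# (WOB, RPM, ROP, mud weight, bulk density, porosity, delta sonic time).
rng = np.random.default_rng(1)
X = rng.random((200, 7))
y = 8.5 + 4.0 * X[:, 3] + 2.0 * X[:, 6] + rng.normal(0, 0.1, 200)  # synthetic pore pressure

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
pred = model.predict(X)

r = np.corrcoef(y, pred)[0, 1]                   # correlation coefficient
aape = np.mean(np.abs((y - pred) / y)) * 100     # average absolute percentage error
print(f"R = {r:.3f}, AAPE = {aape:.2f}%")
```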
Procedia PDF Downloads 149
16733 Usage of the Point Analysis Algorithm (SANN) in Drought Analysis
Authors: Khosro Shafie Motlaghi, Amir Reza Salemian
Abstract:
In arid and semi-arid regions like our country, evapotranspiration accounts for the greatest portion of the water resource. Therefore, knowledge of its changes and of other climate parameters plays an important role in the planning, development, and management of water resources. In this research, the long-term trends of evapotranspiration (ET0), average temperature, and monthly rainfall were tested. To do so, all synoptic stations in Iran were classified climatically using the De Martonne climate index. The present research was carried out in the semi-arid climate of Iran, in which 14 synoptic stations with 30-year statistical records were investigated with three methods: minimum square error, Mann-Kendall, and Wald-Wolfowitz. Evapotranspiration was calculated using the FAO-Penman method. The results of the investigation over the statistical period show that the evapotranspiration trend was positive at 24 percent of the stations, negative at 2 percent, and without any trend at 47 percent. Similarly, the temperature trend was positive at 22 percent of the stations, negative at 19 percent, and without any trend at 64 percent. The rainfall trend results show that the amount of rainfall at most stations did not exhibit a meaningful trend. The results of the Mann-Kendall method were similar to those of the minimum square error method. Regarding the acquired results, we can expect that in future years some regions will face increases in temperature and evapotranspiration.
Keywords: analysis, algorithm, SANN, ET0
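For readers unfamiliar with the Mann-Kendall test used here, a minimal sketch follows; the 30-year ET0 record is hypothetical, and the implementation assumes no tied values for simplicity.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test; returns the S statistic and two-sided p-value.
    Simplified: assumes no tied values in the series."""
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, p

# Hypothetical 30-year annual ET0 record (mm), mirroring the study's record length.
et0 = np.array([1100, 1120, 1090, 1150, 1160, 1130, 1180, 1200, 1175, 1210,
                1190, 1220, 1230, 1205, 1250, 1240, 1260, 1235, 1270, 1280,
                1265, 1290, 1300, 1285, 1310, 1320, 1295, 1330, 1340, 1325])
print(mann_kendall(et0))  # positive S with small p suggests an increasing trend
```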
Procedia PDF Downloads 296
16732 The Internationalization of Capital Market Influencing Debt Sustainability's Impact on the Growth of the Nigerian Economy
Authors: Godwin Chigozie Okpara, Eugine Iheanacho
Abstract:
The paper set out to assess the sustainability of debt in the Nigerian economy. Precisely, it sought to determine the level of debt sustainability and its impact on the growth of the economy; whether the internationalization of the capital market has positively influenced debt sustainability's impact on economic growth; and to ascertain the direction of causality between external debt sustainability and the growth of GDP. In the light of these objectives, ratio analysis was employed for the determination of debt sustainability. Our findings revealed that the periods 1986-1994 and 1999-2004 were periods of severe unsustainable borrowing. The unit root test showed that the variables of the growth model were integrated of order one, I(1), and the cointegration test provided evidence of long-run stability. Considering the dawn of internationalization of the capital market, the researchers employed the structural break approach using the Chow breakpoint test on the vector error correction model (VECM). The result of the VECM showed that debt sustainability, measured by the debt-to-GDP ratio, exerts a negative and significant impact on the growth of the economy, while debt burden, measured by the debt-export ratio and the debt service-export ratio, is negative though insignificant for the growth of GDP. The Chow test result indicated that internationalization of the capital market has no significant effect on the debt overhang impact on the growth of the economy. The Granger causality test indicates a feedback effect from economic growth to debt sustainability indicators. On the basis of these findings, the researchers made recommendations which, if followed closely, will go a long way toward ameliorating the debt burden and engendering economic growth.
Keywords: debt sustainability, internationalization, capital market, cointegration, Chow test
Procedia PDF Downloads 437
16731 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Turkey: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive case study at the country level using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions and gross domestic product (GDP) for Turkey using time series analysis for the period 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. The long-run equilibrium in the VECM suggests no effects of the CO2 emissions and energy use on the GDP in Turkey. There exists a short-run bidirectional relationship between electricity and natural gas consumption, and there is a negative unidirectional causality running from the GDP to electricity use. Overall, the results partly support arguments that there are relationships between energy use and economic output; however, the effects may differ according to the source of energy, as in the case of Turkey for the period 1980-2010. There is no significant relationship between the CO2 emissions and the GDP, or between the CO2 emissions and the energy use, in either the short term or the long term.
Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis
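The ADF / Johansen / VECM pipeline described above can be outlined with statsmodels as below; the three series are synthetic stand-ins sharing a common stochastic trend, and the lag order and deterministic terms are assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Hypothetical annual series standing in for GDP, CO2 and energy use, 1980-2010.
rng = np.random.default_rng(2)
n = 31
common = np.cumsum(rng.normal(0, 1, n))              # shared stochastic trend
data = pd.DataFrame({
    "gdp": common + rng.normal(0, 0.3, n),
    "co2": 0.8 * common + rng.normal(0, 0.3, n),
    "energy": 1.2 * common + rng.normal(0, 0.3, n),
})

# Step 1: ADF test for a unit root in each series.
for col in data:
    stat, pval = adfuller(data[col])[:2]
    print(f"ADF {col}: p = {pval:.3f}")

# Step 2: Johansen test for the number of cointegrating relations.
joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1)

# Step 3: VECM for short- and long-run dynamics.
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm.summary())
```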
Procedia PDF Downloads 504
16730 Baseline Study of Water Quality in Indonesia Using Dynamic Methods and Technologies
Authors: R. L. P. de Lima, F. C. B. Boogaard, D. Setyo Rini, P. Arisandi, R. E. de Graaf-Van Dinther
Abstract:
Water quality in many Asian countries is very poor due to inefficient solid waste management, high population growth and the lack of sewage and purification systems for households and industry. A consortium of Indonesian and Dutch organizations has begun a large-scale international research project to evaluate and propose solutions to the surface water pollution challenges in the Brantas Basin, Indonesia (East Java: Malang / Surabaya). The first phase of the project consisted of a baseline study to assess the current status of surface water bodies and to determine the ambitions and strategies among local stakeholders. This study was conducted with strong participatory, collaborative and knowledge-sharing objectives. Several methods, such as mobile sensors (attached to boats or underwater drones), test strips and mobile apps, bio-monitoring (sediments), ecology scans using underwater cameras, and continuous / static measurements, were applied at different locations in the regions of the basin, at multiple points within the water systems (e.g. spring, upstream / downstream of industry and urban areas, mouth of the Surabaya River, groundwater). Results gave an indication of (reference) values of basic water quality parameters such as turbidity, electrical conductivity, dissolved oxygen and nutrients (ammonium / nitrate). An important outcome was that collecting random samples may not be representative of a body of water, given that water quality parameters can vary widely in space (x, y, and depth) and time (day / night and seasonal). Innovative / dynamic monitoring methods (e.g. underwater drones, sensors on boats) can contribute to a better understanding of the quality of the living environment (water, ecology, sediment) and the factors that affect it. The field work activities, in particular the underwater drones, revealed potential as awareness actions, as they attracted interest from locals and the local press. This baseline study involved cooperation between local managing organizations and Dutch partners, and their willingness to work together is important to ensure participatory actions and social awareness regarding the process of adaptation and strengthening of regulations, and for the construction of facilities such as sewage systems.
Keywords: water quality monitoring, pollution, underwater drones, social awareness
Procedia PDF Downloads 192
16729 Cellular Traffic Prediction through Multi-Layer Hybrid Network
Authors: Supriya H. S., Chandrakala B. M.
Abstract:
Deep learning based models have recently been successfully adopted for network traffic prediction. However, training a deep learning model for various prediction tasks is considered challenging for several reasons. This research work develops a Multi-Layer Hybrid Network (MLHN) for network traffic prediction and analysis; MLHN comprises three distinctive networks for handling different inputs for custom feature extraction. Furthermore, an optimized and efficient parameter-tuning algorithm is introduced to enhance parameter learning. MLHN is evaluated on the "Big Data Challenge" dataset using Mean Absolute Error, Root Mean Square Error and R² as metrics; furthermore, MLHN's efficiency is demonstrated through comparison with a state-of-the-art approach.
Keywords: MLHN, network traffic prediction
Procedia PDF Downloads 89
16728 Error Analysis of Pronunciation of French by Sinhala Speaking Learners
Authors: Chandeera Gunawardena
Abstract:
The present research analyzes the pronunciation errors encountered by thirty Sinhala speaking learners of French, on the assumption that the pronunciation errors are systematic and reflect the interference of the learners' native language. The thirty participants were selected using the random sampling method. At the time of the study, the subjects were studying French as a foreign language for their Bachelor of Arts Degree at the University of Kelaniya, Sri Lanka. The participants were from a homogeneous linguistic background: all speak the same native language (Sinhala), had completed their secondary education in the Sinhala medium, and during it had also learnt French as a foreign language. A battery-operated audio tape recorder and 120-minute blank cassettes were used for recording. A list of 60 words representing all French phonemes was used to diagnose pronunciation difficulties. Before the recording process commenced, the subjects were requested to familiarize themselves with the words by reading them several times. The recording was conducted individually in a quiet classroom, and each recording took approximately fifteen minutes. Each subject was required to read at a normal speed. After the recording was completed, the recordings were replayed to identify common errors, which were immediately transcribed using the International Phonetic Alphabet. Results show that Sinhala speaking learners face problems with French nasal vowels and French initial consonant clusters. The learners also exhibit errors which occur because of interference from their second language (English).
Keywords: error analysis, pronunciation difficulties, pronunciation errors, Sinhala speaking learners of French
Procedia PDF Downloads 210
16727 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning
Authors: Michael A. Sprayberry, Vincent C. Paquit
Abstract:
Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design of experiments and process mapping techniques, there has been little success in establishing an on-the-fly optimization framework that can be adapted to MPBEBM systems. Additionally, data-intensive physics-based modeling and simulation methods are difficult to sustain for a given metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology that frames MPBEBM parameter selection as an optimization problem. An off-policy MORL framework based on the policy gradient is proposed to discover optimal sets of beam power (P) - beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P - v space to maximize returns for the uncertain powder bed environment, producing a melt pool and phase transformation closer to the optimum. The culmination of the training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat} where the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resulting objectives and the mapping of returns onto the P - v space show convergence with experimental observations. The framework therefore provides a model-free multi-objective approach to discovery without the need for trial-and-error experiments.
Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization
Procedia PDF Downloads 91
16726 Profitability Assessment of Granite Aggregate Production and the Development of a Profit Assessment Model
Authors: Melodi Mbuyi Mata, Blessing Olamide Taiwo, Afolabi Ayodele David
Abstract:
The purpose of this research is to create empirical models for assessing the profitability of granite aggregate production in Akure, Ondo State aggregate quarries. In addition, an artificial neural network (ANN) model and multivariate prediction models for granite profitability were developed in the study. A formal survey questionnaire was used to collect data for the study. The data extracted from the case study mine include granite marketing operations, royalty, production costs, and mine production information. The following methods were used to achieve the goal of this study: descriptive statistics, MATLAB 2017, and SPSS 16.0 software for analyzing and modeling the data collected from granite traders in the study areas. The prediction accuracy of the ANN and multivariate regression models was compared using the coefficient of determination (R²), root mean square error (RMSE), and mean square error (MSE). The model evaluation indices revealed that the ANN model, with its lower prediction error, was more suitable for predicting generated profit in a typical quarry. More quarries in Nigeria's southwest region and other geopolitical zones should be considered to improve ANN prediction accuracy.
Keywords: national development, granite, profitability assessment, ANN models
Procedia PDF Downloads 101
16725 Improved Performance Scheme for Joint Transmission in Downlink Coordinated Multi-Point Transmission
Authors: Young-Su Ryu, Su-Hyun Jung, Myoung-Jin Kim, Hyoung-Kyu Song
Abstract:
In this paper, an improved performance scheme for joint transmission is proposed for the downlink (DL) coordinated multi-point (CoMP) system under a transmission power constraint. In this scheme, the serving transmission point (TP) requests a joint transmission from the cooperating TPs and selects one precoding technique according to the channel state information (CSI) fed back from the user equipment (UE). The simulation results show that the bit error rate (BER) and throughput performance of the proposed scheme provide high spectral efficiency and reliable data at the cell edge.
Keywords: CoMP, joint transmission, minimum mean square error, zero-forcing, zero-forcing dirty paper coding
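One of the precoders listed in the keywords, zero-forcing, can be sketched in a few lines; the antenna counts, channel realization, and unit power constraint below are assumptions for illustration, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_ue = 4, 2                       # cooperating TP antennas, user streams (assumed)
H = (rng.normal(size=(n_ue, n_tx)) + 1j * rng.normal(size=(n_ue, n_tx))) / np.sqrt(2)

# Zero-forcing precoder: W = H^H (H H^H)^{-1}, then scaled to meet the
# total transmit power constraint (P_total = 1 assumed here).
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W)                  # enforce the power constraint

s = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)   # example QPSK symbols
x = W @ s                                      # joint transmit signal across TPs
print("effective channel H @ W:\n", np.round(H @ W, 3))  # ~ diagonal: interference nulled
```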
Procedia PDF Downloads 553
16724 Study of Skid-Mounted Natural Gas Treatment Process
Authors: Di Han, Lingfeng Li
Abstract:
This study selects a low-temperature separation dehydration and dehydrochlorination process applicable to skid design. Hysys software is used to simulate the low-temperature separation dehydration and dehydrochlorination process under different refrigeration modes, focusing on comparing the refrigeration effect of the different modes, the condensed amounts of hydrocarbon liquids and alcoholic wastewater, and the adaptability of the process, so as to determine the low-temperature separation process applicable to the natural gas dehydration and dehydrochlorination skid design. Finally, CNG recycling process calculations are carried out for the treated, qualified natural gas to determine the dehydration scheme and the key parameters of the compression process.
Keywords: skid-mounting, dehydration and dehydrochlorination, cryogenic separation process, CNG recovery process calculations
Procedia PDF Downloads 142
16723 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in the ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study employs a supercritical fluid extraction (SFE) approach on hemp seed at various conditions of the parameters temperature (40-80) °C, pressure (200-350) bar, flow rate (5-15) g/min, particle size (0.430-1.015) mm and amount of co-solvent (0-10) % of solvent flow rate, through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques, which create a large number of datasets through resampling from the original dataset and analyze these data to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement. For jackknife resampling, the sample size is 31 (eliminating one observation), and this is repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. For bootstrap resampling, the sample size is 32, repeated 100 times. The estimands for these resampling techniques are the mean, standard deviation, variation coefficient and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approximately 58.5 for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed as 22.5 through both resampling methods. Variance expresses the spread of the data around the mean: a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66%) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2%). Further, the low standard deviation (approximately 1%), low standard error of the mean (< 0.8) and low variation coefficient (< 0.2) reflect the accuracy of the sample for prediction. All the estimated values of the variation coefficient, standard deviation and standard error of the mean fall within the 95% confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
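The two resampling schemes and the estimands described above are easy to reproduce; the sketch below uses a synthetic 32-run sample in place of the measured concentrations, with the replicate counts (32 jackknife, 100 bootstrap) taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical 32 measured omega-6 linoleic acid concentrations (%), one per CCD run.
x = rng.normal(58.5, 1.0, 32)
n = len(x)

# Jackknife: leave one observation out (sample size 31), n replicates of the mean.
jack_means = np.array([np.delete(x, i).mean() for i in range(n)])
jack_se = np.sqrt((n - 1) / n * np.sum((jack_means - jack_means.mean()) ** 2))

# Bootstrap: resample n observations with replacement, 100 replicates as in the study.
boot_means = np.array([rng.choice(x, size=n, replace=True).mean() for _ in range(100)])

print(f"mean = {x.mean():.2f}")
print(f"jackknife SE = {jack_se:.3f}, bootstrap SE = {boot_means.std(ddof=1):.3f}")
print(f"variation coefficient = {x.std(ddof=1) / x.mean():.3f}")
```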
Procedia PDF Downloads 141
16722 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation
Authors: Hangsik Shin
Abstract:
The purpose of this research is to restore the feature locations of an under-sampled photoplethysmogram using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz, then compared the feature locations with those of the original 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. Results showed that the time differences were dramatically decreased by interpolation: the location error was less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error also decreased considerably; however, it was still over 10 ms.
Keywords: peak detection, photoplethysmography, sampling, signal reconstruction
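The decimate-interpolate-compare procedure can be sketched with SciPy as below; a toy two-tone waveform stands in for a real PPG recording, and only the worst-case decimation (10 kHz to 10 Hz) and the first upper peak are checked.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import find_peaks

fs_ref = 10_000                          # reference sampling rate (Hz)
t_ref = np.arange(0, 2, 1 / fs_ref)
ppg = np.sin(2 * np.pi * 1.2 * t_ref) + 0.3 * np.sin(2 * np.pi * 2.4 * t_ref)  # toy PPG

factor = 1000                            # decimate 10 kHz -> 10 Hz (worst case in the study)
t_low, ppg_low = t_ref[::factor], ppg[::factor]

# Restore the 10 kHz grid with a cubic spline through the under-sampled points.
restored = CubicSpline(t_low, ppg_low)(t_ref)

ref_peaks, _ = find_peaks(ppg)
rec_peaks, _ = find_peaks(restored)
err_ms = 1000 * abs(t_ref[ref_peaks[0]] - t_ref[rec_peaks[0]])
print(f"upper-peak location error: {err_ms:.2f} ms")
```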
Procedia PDF Downloads 368
16721 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter
Authors: Jisun Lee, Jay Hyoun Kwon
Abstract:
As an alternative way to compensate for INS (inertial navigation system) error in non-GNSS (Global Navigation Satellite System) environments, geophysical database referenced navigation is being studied. In this study, gravity gradient and terrain data were combined to complement the weakness of a single geophysical dataset as well as to improve the stability of the positioning. The main process for compensating the INS error using the geophysical database was constructed on the basis of the Extended Kalman Filter (EKF). In detail, two types of combination method, a centralized and a decentralized filter, were applied to check the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated by simulation, supposing that the aircraft flies with a precise geophysical database and sensors over nine different trajectories. In particular, the results were compared to those from navigation referenced to a single geophysical database to check the improvement due to the combination of heterogeneous geophysical databases. It was found that the overall navigation performance was improved, but not all trajectories produced better navigation results from the combination of gravity gradient with terrain data. It was also found that the centralized filter generally showed more stable results. This is because the weighting for the decentralized filter could not be optimized due to the local inconsistency of the geophysical data. In the future, switching between geophysical datasets or combining different navigation algorithms will be necessary to obtain more robust navigation results.
Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain
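A heavily simplified, one-dimensional illustration of the centralized idea (stacking both geophysical measurements into a single EKF update) is sketched below; the two "maps", the noise covariances, and the 10 m INS drift are all invented, and a real implementation would carry a full INS state vector rather than a scalar position.

```python
import numpy as np

# Toy 1-D maps: sensed value as a function of along-track position (assumed forms).
grav_map = lambda p: 50 + 5 * np.sin(0.01 * p)      # gravity-gradient "database"
terr_map = lambda p: 200 + 30 * np.cos(0.005 * p)   # terrain-elevation "database"

def ekf_update(p_est, P, z, R):
    """Centralized EKF measurement update: both geophysical sensors stacked."""
    h = np.array([grav_map(p_est), terr_map(p_est)])
    eps = 1e-3   # numerical Jacobian of the stacked measurement model
    H = np.array([[(grav_map(p_est + eps) - grav_map(p_est - eps)) / (2 * eps)],
                  [(terr_map(p_est + eps) - terr_map(p_est - eps)) / (2 * eps)]])
    S = H * P @ H.T + R                  # innovation covariance (P is scalar here)
    K = P * H.T @ np.linalg.inv(S)       # Kalman gain, shape (1, 2)
    p_new = p_est + (K @ (z - h)).item()
    P_new = ((1 - K @ H) * P).item()
    return p_new, P_new

true_p = 1000.0
z = np.array([grav_map(true_p), terr_map(true_p)])  # noiseless measurements for clarity
R = np.diag([0.01, 1.0])                             # assumed sensor noise covariances
p_est, P = 1010.0, 100.0**2                          # INS estimate with 10 m drift
for _ in range(3):                                   # a few iterated updates
    p_est, P = ekf_update(p_est, P, z, R)
print(f"corrected position: {p_est:.2f} m (true {true_p} m)")
```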
Procedia PDF Downloads 349
16720 The Influence of Different Flux Patterns on Magnetic Losses in Electric Machine Cores
Authors: Natheer Alatawneh
Abstract:
Finite element analysis of the magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, including alternating and rotating fields. The rotating fields are generated in configurations ranging between circular and elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns disclose different magnetic losses in the samples under test. Consequently, electric machines require special attention during the core loss calculation process to account for the flux patterns. In this study, a circular rotational single sheet tester is employed to measure the core losses in an electrical steel sample of M36G29. The sample was exposed to an alternating field, a circular field, and elliptical fields with axis ratios of 0.2, 0.4, 0.6 and 0.8. The measured data were applied to a 6-4 switched reluctance motor at three frequencies of interest to industry: 60 Hz, 400 Hz, and 1 kHz. The results disclose the high margin of error that may occur during loss calculations if the flux pattern issue is neglected. The error in different parts of the machine associated with neglecting the flux patterns can be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine's geometrical shape, which has a primary effect on the flux pattern, in order to minimize the magnetic losses in machine cores.
Keywords: alternating core losses, electric machines, finite element analysis, rotational core losses
Procedia PDF Downloads 252
16719 A Novel Machining Method and Tool-Path Generation for Bent Mandrel
Authors: Hong Lu, Yongquan Zhang, Wei Fan, Xiangang Su
Abstract:
Bent mandrels have been widely used as precision moulds in the automobile, shipping and aviation industries. To improve the versatility and efficiency of the turning method for a bent mandrel with a fixed rotational center, an instantaneous machining model based on cutting parameters and machine dimensions is proposed in this paper. A spiral-like tool-path generation approach for the non-axisymmetric turning process of a bent mandrel is developed as well, to address the part-to-part repeatability error in the existing turning model. The actual cutter-location points are calculated from cutter-contact points, which are obtained by sweeping the spiral using the equal-arc-length segment principle in a polar coordinate system. A tool offset to avoid interference between the tool and the workpiece is also considered in the machining model. Depending on the spindle rotational angle, synchronized control of the X-axis, Z-axis and C-axis is adopted to generate the tool path of the turning process. A simulation method is developed to generate the NC program according to the presented model, which includes the calculation of cutter-location points and the generation of the tool path of the cutting process. With a bent mandrel taken as an example, the maximum offset of the center axis is 4 mm in 3D space. Experimental results verify that the machining model and turning method are appropriate for the characteristics of bent mandrels.
Keywords: bent mandrel, instantaneous machining model, simulation method, tool-path generation
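The equal-arc-length segment principle mentioned above can be illustrated on an Archimedean spiral: integrate the polar arc length numerically and invert it to place contact points at equal spacing. The pitch, spacing, and sweep values below are assumed for illustration only.

```python
import numpy as np

a = 0.5                     # spiral pitch parameter: r = a * theta (mm/rad), assumed
ds = 2.0                    # equal arc-length segment between cutter-contact points (mm)
theta_max = 20 * np.pi      # spiral sweep (assumed)

# Integrate arc length s(theta) = integral of sqrt(r^2 + (dr/dtheta)^2) dtheta
# numerically, then invert it to place points at equal arc-length increments.
theta = np.linspace(1e-6, theta_max, 200_000)
integrand = np.sqrt((a * theta) ** 2 + a ** 2)
s = np.concatenate(([0.0],
                    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))))

s_targets = np.arange(0, s[-1], ds)
theta_cc = np.interp(s_targets, s, theta)        # equal-arc-length contact angles
r_cc = a * theta_cc
x_cc, z_cc = r_cc * np.cos(theta_cc), r_cc * np.sin(theta_cc)
print(f"{len(theta_cc)} cutter-contact points at {ds} mm spacing")
```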
Procedia PDF Downloads 336
16718 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z
Authors: Catarina Cruz, Ana Breda
Abstract:
Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question 'for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?'. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, with the cases in which the radius of the Lee spheres equals 2 considered the most difficult. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M; these words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings
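As a concrete companion to the counting argument, the brute-force check below enumerates the words of the Lee sphere B(n, 2) and verifies the closed form 2n² + 2n + 1; for the PL(7, 2) setting of the paper, each codeword would have to cover exactly 113 words. The enumeration approach is ours, chosen for illustration.

```python
from itertools import product

def lee_sphere_size(n, r):
    """Count points of Z^n with Lee norm (sum of absolute coordinates) <= r.
    Enumerates coordinates in [-r, r]^n, so it is practical only for small n, r."""
    return sum(1 for w in product(range(-r, r + 1), repeat=n)
               if sum(map(abs, w)) <= r)

# For radius 2 the closed form is 2n^2 + 2n + 1.
for n in range(1, 8):
    assert lee_sphere_size(n, 2) == 2 * n * n + 2 * n + 1
print("Lee sphere |B(7, 2)| =", lee_sphere_size(7, 2))  # 113 words per codeword
```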
Procedia PDF Downloads 160
16717 Assessment of Time-variant Work Stress for Human Error Prevention
Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee
Abstract:
For an operator in a nuclear power plant, human error is one of the most dreaded factors, as it may result in unexpected accidents. The probability of human error may be low, but the associated risk is enormous. Thus, for accident prevention, it is indispensable to analyze the influence of any factors which may raise the possibility of human error. During the past decades, many research results have shown that the performance of human operators may vary over time due to many factors. Among them, stress is known to be an indirect factor that may cause human error and result in mental illness. To date, quite a few assessment tools have been developed to assess the stress level of human workers. However, it is still questionable whether they can be utilized to anticipate human performance, which relates to human error probability, because they were mainly developed from the viewpoint of mental health rather than industrial safety. A person's stress level may go up or down with work time. In that sense, if these tools are to be applicable in the safety domain, they should at least be able to assess the variation resulting from work time. Therefore, this study aimed to compare their applicability for safety purposes. More than ten kinds of work stress tools were analyzed with reference to assessment items, assessment and analysis methods, and follow-up measures, which are known to be closely related factors of work stress. The results showed that most tools mainly weighted some common organizational factors such as demands, supports, and relationships, in that order, and their weights were broadly similar. However, they failed to recommend practical solutions; instead, they merely advised setting up overall counterplans within a PDCA cycle or risk management activities, which are far from practical human error prevention. Thus, it was concluded that applying stress assessment tools developed mainly for mental health seems impractical for safety purposes with respect to human performance anticipation, and that the development of a new assessment tool is inevitable if one wants to assess stress level in terms of human performance variation and accident prevention. As a consequence, and as a practical countermeasure, this study proposed a new scheme for assessing the work stress level of a human operator that may vary over work time, which is closely related to the possibility of human error.
Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention
Procedia PDF Downloads 673
16716 Study on Accurate Calculation Method of Model Attitude in Wind Tunnel Test
Authors: Jinjun Jiang, Lianzhong Chen, Rui Xu
Abstract:
The accuracy of the model attitude angle plays an important role in the aerodynamic test results of a wind tunnel test. The original method applies a spherical coordinate system transformation to calculate the attitude angle: the model attitude angle is obtained by coordinate transformation and spherical surface mapping applied to the nominal attitude angle (the balance attitude angle in the wind tunnel coordinate system) indicated by the mechanism. First, the coordinate transformation of this method is not only complex but also makes it difficult to establish the transformation relationship between the space coordinate systems, especially after many steps of coordinate transformation; moreover, it cannot realize iterative calculation of the interference relationship between attitude angles. Second, during the calculation process, the arc is approximately replaced by a straight line and the angle by its tangent value, and inverse trigonometric functions are applied. Therefore, the calculation of the attitude angle is complex and inaccurate, and can only be solved approximately for small angles of attack. However, with the advancing development of modern unsteady aerodynamic research, aircraft tend toward high or super-large angles of attack and unsteady research fields. According to engineering practice and vector theory, the concept of a vector angle coordinate system is proposed for the first time, and the vector angle coordinate system of attitude angles is established. With iterative correction calculation, and by avoiding the problems of approximation and inverse trigonometric function solution, the model attitude calculation process is carried out in detail, which validates that the calculation accuracy of the model attitude angles is improved. Based on engineering and theoretical methods, the vector angle coordinate system gives the transformation and angle definition relations between different flight attitude coordinate systems, so that the attitude angle of the corresponding coordinate system can be accurately calculated and its direction determined. In particular, in the channel coupling calculation, the calculation of the attitude angle between the coordinate systems is related only to the angle and has nothing to do with the order of the coordinate system transformations, which simplifies the calculation process.
Keywords: attitude angle, angle vector coordinate system, iterative calculation, spherical coordinate system, wind tunnel test
Procedia PDF Downloads 146
16715 Protective Effect of Levetiracetam on Aggravation of Memory Impairment in Temporal Lobe Epilepsy by Phenytoin
Authors: Asher John Mohan, Krishna K. L.
Abstract:
Objectives: (1) To assess the extent of memory impairment induced by phenytoin (PHT) at normal and reduced doses in temporal lobe epileptic mice. (2) To evaluate the protective effect of levetiracetam (LEV) against the aggravation of memory impairment in temporal lobe epileptic mice by PHT. Materials and Methods: Albino mice of either sex (n=36) were used for the study over a period of 64 days. Convulsions were induced by intraperitoneal administration of pilocarpine 280 mg/kg every 6th day. A radial arm maze (RAM) was employed to evaluate memory impairment every 7th day. The anticonvulsant and memory impairment activities were assessed for normal and reduced PHT doses, both alone and in combination with LEV. RAM error scores and convulsive scores were the parameters considered for this study. Brain acetylcholinesterase and glutamate were determined along with histopathological studies of the frontal cortex. Results: Administration of PHT for 64 days aggravated memory impairment in temporal lobe epileptic mice. Although reducing the PHT dose decreased the degree of memory impairment, it also decreased the anticonvulsant potency. The combination with LEV not only corrected the impaired memory but also restored the potency lost due to the reduction of the dose of the antiepileptic drug employed. These findings were confirmed by enzyme and neurotransmitter levels in addition to histopathological studies. Conclusion: This study thus builds a foundation for combining a nootropic anticonvulsant with an antiepileptic drug to curb the adverse effect of memory impairment associated with temporal lobe epilepsy. However, further extensive research is a must for the practical incorporation of this approach into disease therapy.
Keywords: anti-epileptic drug, phenytoin, memory impairment, pilocarpine
Procedia PDF Downloads 316
16714 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon whereby certain images are more likely to be remembered by humans than others; it is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence: it reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores, suggesting that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
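The analysis step, correlating per-image reconstruction error and nearest-neighbor latent distinctiveness with memorability scores, can be sketched independently of the autoencoder itself; below, random arrays stand in for the latent codes, reconstruction errors, and MemCat-style scores, so the correlations it reports are synthetic.

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import cdist

rng = np.random.default_rng(5)
n_img, latent_dim = 1000, 128

# Stand-ins for autoencoder outputs: latent codes, per-image reconstruction MSE,
# and human memorability scores (hit probabilities in [0, 1]).
latents = rng.normal(size=(n_img, latent_dim))
recon_err = rng.gamma(2.0, 0.01, n_img)
memorability = np.clip(0.5 + 5 * (recon_err - recon_err.mean()), 0, 1)

# Distinctiveness: Euclidean distance to the nearest other image in latent space.
D = cdist(latents, latents)
np.fill_diagonal(D, np.inf)
distinctiveness = D.min(axis=1)

for name, feat in [("reconstruction error", recon_err),
                   ("latent distinctiveness", distinctiveness)]:
    r, p = pearsonr(feat, memorability)
    print(f"{name}: r = {r:.3f}, p = {p:.3g}")
```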
Procedia PDF Downloads 91
16713 Methods for Business Process Simulation Based on Petri Nets
Authors: K. Shoylekova, K. Grigorova
Abstract:
Petri nets were the first standard for business process modeling, which is most probably one of the core reasons why every new standard created afterwards has to be reworked to the point where it can be mapped onto Petri nets. The paper presents a business process repository based on a universal database. The repository provides the possibility for the data about a given process to be stored in three different ways. The business process repository is developed with regard to transforming a given model into a Petri net so that it can be easily simulated. Two different techniques for business process simulation based on Petri nets, Yasper and Woflan, are discussed, and their advantages and drawbacks are outlined. The way of simulating business process models stored in the business process repository is shown.
Keywords: business process repository, Petri nets, simulation, Woflan, Yasper
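For readers new to Petri-net-based simulation, the token game underlying such tools can be shown in a few lines; the two-transition process below is a made-up example, not a model from the paper's repository.

```python
# A minimal Petri net token game: places hold tokens; a transition is enabled
# when every input place has enough tokens, and firing it moves tokens along.
net = {
    "register": ({"start": 1}, {"registered": 1}),   # transition: (inputs, outputs)
    "approve":  ({"registered": 1}, {"done": 1}),
}
marking = {"start": 1, "registered": 0, "done": 0}

def enabled(t):
    inputs, _ = net[t]
    return all(marking[p] >= w for p, w in inputs.items())

def fire(t):
    inputs, outputs = net[t]
    for p, w in inputs.items():
        marking[p] -= w
    for p, w in outputs.items():
        marking[p] += w

while any(enabled(t) for t in net):          # simulate until no transition is enabled
    t = next(t for t in net if enabled(t))
    fire(t)
    print(f"fired {t}: {marking}")
```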
Procedia PDF Downloads 370
16712 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini
Abstract:
A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, like the underestimate of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d = ta, the estimate of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation errors follow an exponential probability density function; 3) every very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these equations should allow improvement of the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
Keywords: central Italy, extreme events, rainfall data, underestimation errors
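The mechanism behind the underestimate, a fixed, non-overlapping aggregation window that cannot straddle a storm the way a sliding window can, is easy to demonstrate numerically; the synthetic one-year, 1-minute rainfall series below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
# One year of 1-minute rainfall depths (mm), mostly dry with random bursts.
rain = np.where(rng.random(525_600) < 0.001,
                rng.exponential(2.0, 525_600), 0.0)

d = 60  # duration of interest: 60 minutes

# True Hd: maximum over a sliding 60-minute window at the native resolution.
csum = np.concatenate(([0.0], np.cumsum(rain)))
hd_true = (csum[d:] - csum[:-d]).max()

# Aggregated Hd: data only available as fixed 60-minute totals (worst case, ta = d).
hd_agg = rain.reshape(-1, d).sum(axis=1).max()

print(f"true Hd = {hd_true:.1f} mm, aggregated Hd = {hd_agg:.1f} mm "
      f"({100 * (1 - hd_agg / hd_true):.1f}% underestimate)")
```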
Procedia PDF Downloads 191
16711 Process Capability Analysis by Using Statistical Process Control of Rice Polished Cylinder Turning Practice
Authors: S. Bangphan, P. Bangphan, T. Boonkang
Abstract:
Quality control helps industries improve product quality and productivity. Statistical Process Control (SPC) is one of the tools used to control the quality of products; here it is applied to turning practice to bring a process in a department of industrial engineering under control. In this research, process control was applied to a turning operation performed on workshop machines. Varying measurements were recorded for a number of samples of a rice polished cylinder obtained from a number of trials of the turning practice. By adopting the SPC technique, the process is finally brought under control and the process capability is improved.
Keywords: rice polished cylinder, statistical process control, control charts, process capability
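The standard SPC workflow named in the keywords, control charts plus a capability study, might look like the sketch below; the diameter data, subgroup layout, and specification limits are invented, while the chart constants are the standard values for subgroups of size 5.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical diameter measurements (mm): 25 subgroups of 5 turned cylinders.
data = rng.normal(40.0, 0.05, size=(25, 5))

xbar, rbar = data.mean(axis=1).mean(), np.ptp(data, axis=1).mean()
A2, D3, D4, d2 = 0.577, 0.0, 2.114, 2.326   # standard constants for subgroup size 5

print(f"X-bar chart: UCL={xbar + A2 * rbar:.3f}, CL={xbar:.3f}, "
      f"LCL={xbar - A2 * rbar:.3f}")
print(f"R chart:     UCL={D4 * rbar:.4f}, CL={rbar:.4f}, LCL={D3 * rbar:.4f}")

# Process capability against assumed specification limits 40 +/- 0.2 mm.
usl, lsl = 40.2, 39.8
sigma = rbar / d2                            # within-subgroup sigma estimate
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - xbar, xbar - lsl) / (3 * sigma)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```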
Procedia PDF Downloads 489
16710 An Application of Modified M-out-of-N Bootstrap Method to Heavy-Tailed Distributions
Authors: Hannah F. Opayinka, Adedayo A. Adepoju
Abstract:
This study is an extension of a prior study on the modification of the existing m-out-of-n (moon) bootstrap method for heavy-tailed distributions, in which the modified m-out-of-n (mmoon) bootstrap was proposed as an alternative to the existing moon technique. In this study, both the moon and mmoon techniques were applied to two real income datasets, which followed Lognormal and Pareto distributions respectively, with finite variances. The performances of the two techniques were compared using the Standard Error (SE) and Root Mean Square Error (RMSE). The findings showed that mmoon outperformed the moon bootstrap in terms of smaller SEs and RMSEs for all the sample sizes considered in the two datasets.
Keywords: bootstrap, income data, lognormal distribution, Pareto distribution
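The baseline moon idea, resampling m < n observations rather than n, is sketched below on a Pareto income-like sample with finite variance; the m = n^0.7 choice and the sqrt(m/n) rescaling back to the size-n estimator are common conventions we assume here, not the paper's mmoon modification.

```python
import numpy as np

rng = np.random.default_rng(8)
# Heavy-tailed income-like sample: Pareto with finite variance (shape > 2).
n = 500
income = (rng.pareto(2.5, n) + 1) * 10_000

def bootstrap_se(x, m, B=2000):
    """SE of the sample mean from B resamples of size m drawn with replacement."""
    means = np.array([rng.choice(x, size=m, replace=True).mean() for _ in range(B)])
    return means.std(ddof=1)

se_full = bootstrap_se(income, n)                   # classical n-out-of-n bootstrap
m = int(n ** 0.7)                                   # an assumed m = n^gamma choice
se_moon = bootstrap_se(income, m) * np.sqrt(m / n)  # rescaled to the size-n estimator
print(f"n-bootstrap SE = {se_full:.1f}, m-out-of-n SE = {se_moon:.1f}")
```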
Procedia PDF Downloads 186
16709 Climate Change in Awash River Basin of Ethiopia: A Projection Study Using Global and Regional Climate Model Simulations
Authors: Mahtsente Tadese, Lalit Kumar, Richard Koech
Abstract:
The aim of this study was to project and analyze climate change in the Awash River Basin (ARB) using bias-corrected global and regional climate model simulations. The analysis included a baseline period from 1986-2005 and two future scenarios (the 2050s and 2070s) under two representative concentration pathways (RCP4.5 and RCP8.5). Bias correction methods were evaluated using graphical and statistical methods; following this evaluation, Distribution Mapping (DM) and Power Transformation (PT) were used for the temperature and precipitation projections, respectively. The 2050s and 2070s RCP4.5 simulations showed an increase in precipitation during half of the months of 32% and 10%, respectively, whereas the 2050s and 2070s RCP8.5 simulations indicated decreases in precipitation of 18% and 26%, respectively. The RCP8.5 simulations also indicated a significant decrease in precipitation in four of the months (February/March to May), with the highest decreasing rate being 34.7%. The 2050s and 2070s RCP4.5 simulations showed an increase of 0.48-2.6 °C in maximum temperature; under RCP8.5, the increase reached 3.4 °C and 4.1 °C in the 2050s and 2070s, respectively. The changes in precipitation and temperature might worsen water stress, floods, and droughts in the ARB, so critical focus should be given to mitigation strategies and management options to reduce the negative impacts. The findings of this study provide valuable information on future precipitation and temperature changes in the ARB, which will help in the planning and design of sustainable mitigation approaches in the basin.
Keywords: variability, climate change, Awash River Basin, precipitation
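Distribution mapping, one of the two bias-correction methods named above, amounts to empirical quantile mapping from the model's baseline distribution onto the observed one; the sketch below uses synthetic normal temperature series, and the quantile grid and series values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
obs = rng.normal(28.0, 3.0, 3000)          # observed daily Tmax, baseline period
model_hist = rng.normal(25.0, 4.0, 3000)   # raw GCM/RCM output, same period (biased)
model_fut = rng.normal(27.0, 4.0, 3000)    # raw projection, e.g. 2050s RCP4.5

def distribution_mapping(x, model_ref, obs_ref):
    """Empirical quantile mapping: map each value through the model's baseline
    quantiles onto the observed quantiles of the same period."""
    quantiles = np.linspace(0, 1, 101)
    m_q = np.quantile(model_ref, quantiles)
    o_q = np.quantile(obs_ref, quantiles)
    return np.interp(x, m_q, o_q)

corrected_fut = distribution_mapping(model_fut, model_hist, obs)
print(f"raw future mean:       {model_fut.mean():.2f} C")
print(f"corrected future mean: {corrected_fut.mean():.2f} C")
```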
Procedia PDF Downloads 174
16708 Organizational Ideologies and Their Embeddedness in Fashion Show Productions in Shanghai and London Fashion Week: International-Based-Chinese Independent Designers' Participatory Behaviors in Different Fashion Cities
Authors: Zhe Wang
Abstract:
The fashion week, as a critical international fashion event shaping world fashion cities, is one of the most significant world events and serves as the core medium for designers to stage new collections. However, its role in bringing about and shaping the design ideologies of major fashion cities has long been neglected from a fashion ecosystem perspective. With the expanding cultural and commercial scale of international fashion weeks, their organizational structures are becoming more complex. The emerging fashion city, typified by Shanghai, is a newly formed 'hodgepodge' transforming the current global fashion ecosystem. A city's legitimate fashion institutions, typically the organizers of international fashion weeks, have cultivated various cultural characteristics via the rules and regulations pertaining to these fashion weeks. Under these circumstances, designers' participatory behaviors, specifically show design and production, are influenced by the cultural ideologies of the official organizers and institutions. This research compares international-based Chinese (IBC) independent designers' participatory behavior at London and Shanghai Fashion Weeks, specifically the way designers present their clothing and produce their shows, both of which are found to be profoundly influenced by the cultural and design ideologies of the fashion weeks. They are, to a large degree, shaped by domestic institutions and organizers. Shanghai Fashion Week has given rise to a multiple, mass-oriented entertainment-carnival design and cultural ideology in Shanghai, thereby determining the explicit cultural codes or intangible rules that IBC designers must adhere to when designing and producing fashion shows. Therefore, influenced by the different cultural characteristics of the two cities, IBC designers' show designs and productions, in turn, play an increasingly vital role in shaping the design characteristics of an international fashion week. By researching the organizational systems and design preferences of the organizers of London and Shanghai fashion weeks, this paper demonstrates the embeddedness of design systems in the formation of design ideologies under various cultural and institutional contexts. The core methodology of this research is ethnography. As a crucial part of a PhD project on innovations in fashion shows in a cross-cultural context run by Edinburgh College of Art, School of Design, the organizational culture of fashion weeks in different cultural contexts was investigated in London and Shanghai for approximately six months each. Two IBC designers, Angel Chen and Xuzhi Chen, were followed during their participation in London and Shanghai Fashion Weeks from September 2016 to June 2017, during which two consecutive seasons were researched in order to verify the consistency of the design ideologies' associations with organizational systems and culture.
Keywords: institutional ideologies, international fashion weeks, IBC independent designers, fashion show
Procedia PDF Downloads 118
16707 Assessment of Patient Cooperation and Compliance in Three Stages of Orthodontic Treatment in Adult Patients: A Cross-Sectional Study
Authors: Hafsa Qabool, Rashna Sukhia, Mubassar Fida
Abstract:
Introduction: The success of orthodontic mechanotherapy is highly dependent upon patient cooperation and compliance throughout the duration of treatment. This study was conducted to assess the cooperation and compliance of adult orthodontic patients during the leveling and alignment, space closure/molar correction, and finishing stages of tooth movement. Materials and Methods: Patient cooperation and compliance during the three stages of orthodontic treatment were assessed using the Orthodontic Patient Cooperation Scale (OPCS) and the Clinical Compliance Evaluation (CCE) form. A sample size of 38 was calculated for each stage of treatment; therefore, 114 subjects were included in the study. The Shapiro-Wilk test identified that the data were normally distributed. One-way ANOVA was used to evaluate percentage cooperation and compliance among the three stages, and pairwise comparisons between the three stages were performed using the post-hoc Tukey test. Results: A statistically significant difference was seen in patient compliance scores using the CCE (p = 0.01); however, the results of the OPCS showed a non-significant difference in patient cooperation (p = 0.16) among the three stages of treatment. Post-hoc analysis showed significant differences (p = 0.01) in patient cooperation and compliance between the space closure and finishing stages. A highly significant (p < 0.001) decline in oral hygiene was found with the progression of orthodontic treatment. Conclusions: An improvement in the cooperation and compliance levels of adult orthodontic patients was observed during the space closure and molar correction stage, which then declined as treatment progressed. Oral hygiene was progressively compromised as orthodontic treatment progressed.
Keywords: patient compliance, adult orthodontics, orthodontic motivation, orthodontic patient adherence
Procedia PDF Downloads 168
16706 Numerical Simulation of the Flowing of Ice Slurry in Seawater Pipe of Polar Ships
Authors: Li Xu, Huanbao Jiang, Zhenfei Huang, Lailai Zhang
Abstract:
In recent years, with global warming, the sea-ice extent of the Arctic has undergone an evident decrease, and the Arctic channel has attracted the attention of the shipping industry. Ice crystals present in the seawater of the Arctic channel enter the seawater system of the ship with the seawater and have been found to block the seawater pipes. Cooler paralysis, auxiliary machine faults and even ship power system paralysis may occur in serious cases. In order to reduce the effect of high temperature on auxiliary equipment, the seawater system uses external ice-water to participate in the cooling cycle and achieve a flowing state, so that the distribution of ice crystals in the seawater pipe can be obtained. As the ice slurry system is a solid-liquid two-phase system, the flow process of the ice-water mixture is very complex and diverse. In this paper, the flow process of ice slurry in a seawater pipe is simulated with fluid dynamics simulation software based on the k-ε turbulence model. As the ice packing fraction is a key factor affecting the distribution of ice crystals, the influence of the ice packing fraction on the flow process of the ice slurry is analyzed. The simulation results show that when the ice packing fraction is relatively large, the distribution of ice crystals is uneven during the flow of the seawater, which increases the possibility of blockage. This work provides scientific forecasting methods for the formation of ice blockages in seawater piping systems and has important significance for the operational reliability of polar ships in the future.
Keywords: ice slurry, seawater pipe, ice packing fraction, numerical simulation
Procedia PDF Downloads 367