Search results for: Fuzzy Logic estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2978

1988 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates

Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau

Abstract:

There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between passage of time judgements (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded of a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long durations, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into RATIOs (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more if they were given prospective instructions, but only in the eight-minute task. Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence ‘feeling of time’ and ‘estimation of time’ differently, contributing to the existing theory that these two forms of time judgement rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (M = 42.5 minutes) and overestimated the eight-minute task (M = 10.7 minutes). Yet, they reported the 58-minute task as passing significantly slower on the Likert scale (M = 2.5) than the eight-minute task (M = 4.1). In fact, a significant correlation was found between PoTJs and duration estimates (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a ‘slow-feeling’ condition and overestimate a ‘fast-feeling’ condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.
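
The three-factor design above maps onto a standard between-subjects ANOVA. A minimal sketch in Python (pandas/statsmodels) of the RATIO analysis, using synthetic data and hypothetical column names in place of the study's dataset:

```python
# Sketch of the 2 x 2 x 2 between-subjects ANOVA on RATIO scores.
# Data and column names are illustrative stand-ins, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 273
df = pd.DataFrame({
    "paradigm": rng.choice(["prospective", "retrospective"], n),
    "duration": rng.choice([8, 58], n),
    "demand":   rng.choice(["low", "high"], n),
})
df["estimate_min"] = df["duration"] * rng.normal(1.1, 0.3, n)

# RATIO = estimate / actual duration, standardizing across task lengths
df["ratio"] = df["estimate_min"] / df["duration"]

model = ols("ratio ~ C(paradigm) * C(duration) * C(demand)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # inspect the paradigm x duration interaction
```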

Keywords: duration estimates, long durations, passage of time judgements, task demands

Procedia PDF Downloads 132
1987 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must choose between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance: high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too great an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers’ hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”: one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 88
1986 Poster: Incident Signals Estimation Based on a Modified MCA Learning Algorithm

Authors: Rashid Ahmed , John N. Avaritsiotis

Abstract:

Many signal subspace-based approaches have already been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two procedures for DOA estimation based on neural networks are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix. In this paper, we modify a Minor Component Analysis (MCA(R)) learning algorithm to enhance convergence, since convergence is essential for moving the MCA algorithm towards practical applications. The learning rate parameter is also examined, because it has a direct effect on the convergence of the weight vector and on the error level, and a suitable choice ensures fast convergence of the algorithm. MCA is then performed to determine the estimated DOA. Preliminary results are furnished to illustrate the convergence achieved.
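
The core of any MCA algorithm is an anti-Hebbian update that converges to the eigenvector of the smallest eigenvalue. A minimal NumPy sketch of such a rule is given below for reference; the modified MCA(R) algorithm of the paper differs in its stabilization and learning-rate handling:

```python
# Minimal minor-component-analysis (MCA) learning rule: projected gradient
# descent on the Rayleigh quotient, converging to the eigenvector of the
# covariance matrix with the smallest eigenvalue. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
R = np.cov(rng.standard_normal((3, 2000)))    # sample covariance (stand-in for array data)

w = rng.standard_normal(3)
w /= np.linalg.norm(w)
eta = 0.01                                    # learning rate: governs convergence speed and error level
for _ in range(5000):
    w -= eta * (R @ w - (w @ R @ w) * w)      # anti-Hebbian step
    w /= np.linalg.norm(w)                    # renormalize for numerical stability

eigvals, eigvecs = np.linalg.eigh(R)
print(abs(w @ eigvecs[:, 0]))                 # ~1.0: aligned with the minor eigenvector
```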

Keywords: Direction of Arrival, neural networks, Principal Component Analysis, Minor Component Analysis

Procedia PDF Downloads 452
1985 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems

Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang

Abstract:

In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory for actively isolating faults after FD. Third, an RMPC controller based on state estimation for LPV systems is designed to control the system in the presence of disturbance and measurement noise and to tolerate faults. In addition, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.

Keywords: fault detection, linear parameter varying, model predictive control, set theory

Procedia PDF Downloads 255
1984 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy

Authors: Chhabi Nigam, S. Ramakrishnan

Abstract:

This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of the Stripmap mode of data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image mis-registration. The effect of the Doppler centroid is analyzed in this paper using multiple data sets collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, as they impact the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode, namely range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction, are analyzed to find the performance limits and the dependence of the final image on the imaging geometry. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated. The paper also brings out the processing of low-squint SAR data, and the challenges and performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water and bright scatterers, is also presented.
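
For the Doppler centroid estimation stage, the classic correlation-based estimator (Madsen's method) recovers the centroid from the phase of the average azimuth lag-1 autocorrelation. A short sketch follows; this is a standard choice for the stage named above, not necessarily the exact estimator used in the paper:

```python
# Correlation-based Doppler centroid estimator (Madsen's method) sketch.
import numpy as np

def doppler_centroid(raw, prf):
    """raw: complex range-compressed SAR data, shape (range_bins, azimuth_samples)."""
    acc = np.sum(raw[:, 1:] * np.conj(raw[:, :-1]))  # average azimuth lag-1 correlation
    return prf / (2 * np.pi) * np.angle(acc)         # baseband Doppler centroid in Hz

# e.g. f_dc = doppler_centroid(raw_data, prf=1500.0), with the PRF value illustrative
```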

Keywords: ambiguous target, Doppler centroid, image registration, airborne SAR

Procedia PDF Downloads 218
1983 Fuzzy Set Qualitative Comparative Analysis in Business Models' Study

Authors: K. Debkowska

Abstract:

The aim of this article is to present the possibilities of using Fuzzy Set Qualitative Comparative Analysis (fsQCA) in research concerning the business models of enterprises. FsQCA is a bridge between quantitative and qualitative research, and its potential can be used in the analysis and evaluation of business models. The article presents the results of a study conducted on enterprises belonging to different sectors: transport and logistics, industry, building construction, and trade. The enterprises were researched taking into account the components of their business models and the financial condition of the companies. Business models are areas of a complex and heterogeneous nature. The use of fsQCA made it possible to answer the following question: which components of a business model, and in which configuration, influence the better financial condition of enterprises. The analysis was performed separately for particular sectors. This made it possible to compare the combinations of business model components that actively influence the financial condition of enterprises in the analyzed sectors. The following components of business models were analyzed for the purposes of the study: Key Partners, Key Activities, Key Resources, Value Proposition, Channels, Cost Structure, Revenue Streams, Customer Segments and Customer Relationships. These components constituted the variables shaping the financial results of the enterprises in the study. The results of the study lead us to believe that fsQCA can help in analyzing and evaluating a business model, which is important in terms of making a business decision about the business model used or its change. In addition, the results obtained by fsQCA can be applied by all stakeholders connected with the company.
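
The two core fsQCA measures, consistency (how nearly a configuration is a subset of the outcome) and coverage (how much of the outcome it explains), reduce to simple fuzzy set arithmetic. A minimal sketch with illustrative membership scores, assuming calibration has already been done:

```python
# fsQCA consistency and coverage for a candidate configuration X and outcome Y.
import numpy as np

def consistency(x, y):
    return np.sum(np.minimum(x, y)) / np.sum(x)   # sufficiency of X for Y

def coverage(x, y):
    return np.sum(np.minimum(x, y)) / np.sum(y)   # empirical relevance of X

# configuration membership = fuzzy AND (minimum) of component memberships
key_resources = np.array([0.9, 0.4, 0.8, 0.2])    # illustrative calibrated scores
value_prop    = np.array([0.7, 0.6, 0.9, 0.3])
good_finance  = np.array([0.8, 0.5, 0.9, 0.1])    # outcome: good financial condition

x = np.minimum(key_resources, value_prop)
print(consistency(x, good_finance), coverage(x, good_finance))
```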

Keywords: business models, components of business models, data analysis, fsQCA

Procedia PDF Downloads 173
1982 Chemometric Estimation of Phytochemicals Affecting the Antioxidant Potential of Lettuce

Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Aleksandra Tepic-Horecki, Zdravko Sumic

Abstract:

In this paper, the influence of the contents of six phytochemicals (phenols, carotenoids, chlorophyll a, chlorophyll b, chlorophyll a + b and vitamin C) on the antioxidant potential of the Murai and Levistro lettuce varieties was evaluated. Variable selection was made by the generalized pair correlation method (GPCM) as a novel ranking method. This method is used for discriminating between two variables that correlate almost equally with a dependent variable. Fisher’s conditional exact test and McNemar’s test were carried out. The established multiple linear regression (MLR) models were statistically evaluated. Chlorophyll a, chlorophyll a + b and total carotenoid content stand out as the best phytochemicals for antioxidant potential prediction. This was confirmed through both GPCM and MLR, and the predictive ability of the obtained MLR models can be used for antioxidant potential estimation of similar lettuce samples. This article is based upon work from the project of the Provincial Secretariat for Science and Technological Development of Vojvodina (No. 114-451-347/2015-02).

Keywords: antioxidant activity, generalized pair correlation method, lettuce, regression analysis

Procedia PDF Downloads 389
1981 Estimation of Fouling in a Cross-Flow Heat Exchanger Using Artificial Neural Network Approach

Authors: Rania Jradi, Christophe Marvillet, Mohamed Razak Jeday

Abstract:

One of the most frequently encountered problems in industrial heat exchangers is fouling, which degrades the thermal and hydraulic performance of this type of equipment, thus leading to failure if undetected. Fouling occurs due to the accumulation of undesired material on the heat transfer surface. It is therefore necessary to understand heat exchanger fouling dynamics in order to plan mitigation strategies and ensure sustainable and safe operation. This paper proposes an Artificial Neural Network (ANN) approach to estimate the fouling resistance in a cross-flow heat exchanger from operating data collected on a phosphoric acid concentration loop. A set of 361 operating data points was used to validate the proposed model. The ANN attains AARD = 0.048%, MSE = 1.811×10⁻¹¹, RMSE = 4.256×10⁻⁶ and r² = 99.5% accuracy, which confirms that it is a credible and valuable approach for industrialists and technologists who are faced with the drawbacks of fouling in heat exchangers.
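
The reported fit statistics can be reproduced around any small regressor. A sketch with a scikit-learn MLP standing in for the paper's ANN; the data, network size and variable names are assumptions, not the study's:

```python
# Fouling-resistance regression sketch with AARD, MSE, RMSE and r² metrics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.uniform(size=(361, 4))                          # stand-in operating variables
y = X @ np.array([1.0, 0.5, 0.2, 0.1]) + 0.01           # stand-in fouling resistance

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
y_hat = ann.predict(X_te)

mse = mean_squared_error(y_te, y_hat)
rmse = np.sqrt(mse)
aard = np.mean(np.abs((y_te - y_hat) / y_te)) * 100     # average absolute relative deviation, %
r2 = r2_score(y_te, y_hat)
print(aard, mse, rmse, r2)
```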

Keywords: cross-flow heat exchanger, fouling, estimation, phosphoric acid concentration loop, artificial neural network approach

Procedia PDF Downloads 199
1980 BER of the Leaky Feeder under Rayleigh Fading Multichannel Reception with Imperfect Phase Estimation

Authors: Hasan Farahneh, Xavier Fernando

Abstract:

The Leaky Feeder (LF) has been a proven technology for many decades; it promises broadband wireless access at short range but has been overlooked until now. The LF is a natural MIMO transceiver ideal for micro and pico cells. In this work, the LF is modeled as a linear antenna array in a Multiple-Input Single-Output (MISO) configuration, and the average bit error rate (BER) is derived in a Rayleigh fading channel assuming ideal, independent and identically distributed (i.i.d.) paths, i.e., no correlation or mutual coupling between the transmit antennas (slots) or at the receiver antenna, with QPSK modulation and imperfect phase estimation. We consider maximal ratio transmission (MRT) at the transmit end and maximal ratio combining (MRC) at the receiving end. Analytical expressions are derived for the BER with radiating cable transmitters. The effects of slot spacing and carrier frequency on the BER are also studied. Numerical evaluations show that the radiating cable transmitter offers a much lower BER than a single antenna transmitter at the same SNR.
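
The derived BER can be cross-checked by Monte Carlo simulation. A simplified sketch of QPSK over an N-slot MISO Rayleigh channel with MRT and Gaussian phase-estimation error; the slot count, SNR and phase-error spread below are illustrative assumptions:

```python
# Monte Carlo BER sketch: QPSK, N-slot MISO Rayleigh channel, MRT with
# per-slot phase mismatch. Illustrative stand-in for the analytical result.
import numpy as np

rng = np.random.default_rng(0)
N, nsym, snr_db, sigma_phase = 4, 200_000, 10.0, 0.2   # slots, symbols, SNR, phase-error std (rad)

bits = rng.integers(0, 2, (nsym, 2))
s = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)   # Gray-mapped QPSK

h = (rng.standard_normal((nsym, N)) + 1j * rng.standard_normal((nsym, N))) / np.sqrt(2)
eps = rng.normal(0, sigma_phase, (nsym, N))            # phase mismatch per slot
norm = np.linalg.norm(h, axis=1, keepdims=True)
w = np.abs(h) * np.exp(-1j * (np.angle(h) + eps)) / norm   # MRT weights, imperfect phase

g = np.sum(h * w, axis=1)                              # effective complex channel gain
noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
y = g * s + noise_std * (rng.standard_normal(nsym) + 1j * rng.standard_normal(nsym))

y_eq = y * np.exp(-1j * np.angle(g))                   # coherent detection
bits_hat = np.stack([(y_eq.real < 0), (y_eq.imag < 0)], axis=1)
print("BER:", np.mean(bits_hat != bits))
```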

Keywords: leaky feeder, BER, QPSK, Rayleigh fading, channel gain, phase mismatch

Procedia PDF Downloads 384
1979 Process for Analyzing Information Security Risks Associated with the Incorporation of Online Dispute Resolution Systems in the Context of Conciliation in Colombia

Authors: Jefferson Camacho Mejia, Jenny Paola Forero Pachon, Luis Carlos Gomez Florez

Abstract:

The innumerable possibilities offered by the use of Information Technology (IT) in the development of different socio-economic activities have brought about a change in the social paradigm and the emergence of the so-called information and knowledge society. The Colombian government, aware of this reality, has been promoting the use of IT as part of the E-government strategy adopted in the country. However, it is well known that the use of IT implies the existence of certain threats that put the security of information in the digital environment at risk. One of the priorities of the Colombian government is to improve access to alternative justice through IT, in particular, access to Alternative Dispute Resolution (ADR): conciliation, arbitration and friendly composition, by means of which citizens directly resolve their differences. To this end, a trend has been identified in the use of Online Dispute Resolution (ODR) systems, which extend the benefits of ADR to the digital environment through the use of IT. This article presents a process for the analysis of information security risks associated with the incorporation of ODR systems in the context of conciliation in Colombia, based on four fundamental stages identified in the literature: (I) identification of assets, (II) identification of threats and vulnerabilities, (III) estimation of the impact, and (IV) estimation of risk levels. The methodological design adopted for this research was grounded theory, since it involves interactions that are applied to a specific context and from the perspective of diverse participants. As a result of this investigation, the activities to be followed in carrying out an analysis of information security risks, in the context of conciliation in Colombia supported by ODR systems, are defined, thus contributing to the estimation of risks and making their subsequent treatment possible.

Keywords: alternative dispute resolution, conciliation, information security, online dispute resolution systems, process, risk analysis

Procedia PDF Downloads 241
1978 DG Allocation to Reduce Production Cost by Reducing Losses in Radial Distribution Systems Using Fuzzy

Authors: G. V. Siva Krishna Rao, B. Srinivasa Rao

Abstract:

Electrical energy is vital in every aspect of day-to-day life. Keen interest is taken in all possible sources of energy from which it can be generated, and this has led to encouraging the generation of electrical power using renewable energy resources such as solar, tidal waves and wind energy. Due to the increasing interest in renewable sources in recent times, studies on the integration of distributed generation into the power grid have rapidly increased. Distributed Generation (DG) is a promising solution to many power system problems such as voltage regulation, power loss and reduction in operational cost. To reduce production cost, it is important to minimize the losses by determining the location and size of local generators to be placed in the radial distribution system. In this paper, the reduction of production cost by optimally sizing a DG unit operated at optimal power factor is addressed. The optimal size of the DG unit is calculated analytically, and suitable nodes for DG placement that minimize production cost with minimum loss are determined by a fuzzy technique using approximate reasoning. The total cost of power generation with and without the DG unit is compared over a one-year period. The suggested method is programmed in MATLAB and tested on the IEEE 33-bus system, and the results are presented.
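
The "exact loss formula" in the keywords is the standard expression for total real-power loss in terms of bus injections; the analytical optimal DG size at a candidate bus follows from setting the corresponding partial derivative to zero. A hedged sketch with illustrative inputs, not the IEEE 33-bus data:

```python
# Exact loss formula: P_L = sum_ij [a_ij (P_i P_j + Q_i Q_j) + b_ij (Q_i P_j - P_i Q_j)],
# with a_ij = R_ij cos(d_i - d_j)/(V_i V_j) and b_ij = R_ij sin(d_i - d_j)/(V_i V_j).
import numpy as np

def exact_loss(P, Q, V, delta, R):
    """P, Q: bus injections; V, delta: voltage magnitudes/angles; R: real part of Zbus."""
    n = len(P)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            a = R[i, j] * np.cos(delta[i] - delta[j]) / (V[i] * V[j])
            b = R[i, j] * np.sin(delta[i] - delta[j]) / (V[i] * V[j])
            loss += a * (P[i] * P[j] + Q[i] * Q[j]) + b * (Q[i] * P[j] - P[i] * Q[j])
    return loss
```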

Keywords: distributed generation, operational cost, exact loss formula, optimum size, optimum location

Procedia PDF Downloads 485
1977 A Game of Information in Defense/Attack Strategies: Case of Poisson Attacks

Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez

Abstract:

In this paper, we briefly introduce the concept of Poisson attacks in the case of defense/attack strategies where attacks are assumed to be continuous. We suggest a game model in which the attacker will combine both criteria of a sufficient confidence level of a successful attack and a reasonably small size of the estimation error in order to launch an attack. Here, the estimation error arises from assessing the system failure upon attack using aggregate data at the system level; the corresponding error is referred to as aggregation error. On the other hand, the defender will attempt to deter the attack by making one or both criteria inapplicable. The defender will build his/her strategy by both strengthening the targeted system and increasing the size of the error. We formulate the defender's problem based on appropriate optimization models. The attacker will opt for Bayesian updating in assessing the impact of the improvement made by the defender. Then, the attacker will evaluate the feasibility of the attack before deciding whether or not to launch it. We provide illustrations to better explain the process.

Keywords: attacker, defender, game theory, information

Procedia PDF Downloads 469
1976 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Using Cloud Point to Obtain Better Estimates of Thermodynamic Parameters

Authors: Jyoti Sahu, Vinay A. Juvekar

Abstract:

Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g., polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the solution of the electrolyte/organic molecule at different temperatures. Since the number of parameters is large, inaccuracy can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through temperature and composition derivatives. Hence, cloud point measurement can be used for accurate estimation of the temperature and composition dependence of the parameters in the model for free energy. Thus, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since the parameters estimated in each of the two steps are fewer, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameters are estimated. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (isopropanol/ethanol) as the model system. The Robinson-Stokes-Glueckauf model is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to the electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz. χwp, χws and χps (w-water, p-polymer, s-salt). These parameters depend both on temperature and concentrations. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence is expressed in an analogous parametric form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of electrolyte-water-organic molecule mixtures is measured using a cloud point measuring apparatus. The temperature- and composition-dependent interaction parameters for electrolyte-water-organic molecule are estimated through measurement of the cloud point of the solution. The model is used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperature of different compositions of electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments. Thus, the importance of the CST data in the estimation of the parameters of the thermodynamic model is confirmed through this work.

Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature

Procedia PDF Downloads 393
1975 Performance Evaluation of a Minimum Mean Square Error-Based Physical Sidelink Shared Channel Receiver under Fading Channel

Authors: Yang Fu, Jaime Rodrigo Navarro, Jose F. Monserrat, Faiza Bouchmal, Oscar Carrasco Quilis

Abstract:

Cellular Vehicle-to-Everything (C-V2X) is considered a promising solution for future autonomous driving. From Release 16 to Release 17, the Third Generation Partnership Project (3GPP) has introduced the definitions and services for 5G New Radio (NR) V2X. Experience from previous generations has shown that establishing a simulator for C-V2X communications is an essential preliminary step to achieve reliable and stable communication links. This paper proposes a complete framework for a link-level simulator based on the 3GPP specifications for the Physical Sidelink Shared Channel (PSSCH) of the 5G NR Physical Layer (PHY). In this framework, several algorithms in the receiver part, i.e., a sliding window in channel estimation and Minimum Mean Square Error (MMSE)-based equalization, are developed. Finally, the performance of the developed PSSCH receiver is validated through extensive simulations under different assumptions.
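
The two receiver algorithms named above have compact textbook forms. A hedged sketch of per-resource-element MMSE equalization with a sliding-window smoother applied to least-squares channel estimates; the window length and variable names are assumptions:

```python
# MMSE equalization and sliding-window channel-estimate smoothing sketch.
import numpy as np

def sliding_window_smooth(h_ls, win=5):
    """Average noisy least-squares channel estimates over a sliding window."""
    kernel = np.ones(win) / win
    return np.convolve(h_ls, kernel, mode="same")

def mmse_equalize(y, h_est, noise_var):
    """y: received symbols; h_est: channel estimate per resource element."""
    w = np.conj(h_est) / (np.abs(h_est) ** 2 + noise_var)   # MMSE weight
    return w * y
```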

Keywords: C-V2X, channel estimation, link-level simulator, sidelink, 3GPP

Procedia PDF Downloads 133
1974 Implementation of Integrated Multi-Channel Analysis of Surface Waves and Waveform Inversion Techniques for Seismic Hazard Estimation with Emphasis on Associated Uncertainty: A Case Study at Zafarana Wind Turbine Towers Farm, Egypt

Authors: Abd El-Aziz Khairy Abd El-Aal, Yuji Yagi, Heba Kamal

Abstract:

In this study, an integrated Multi-channel Analysis of Surface Waves (MASW) technique is applied to explore the geotechnical parameters of subsurface layers at the Zafarana wind farm. Moreover, a seismic hazard procedure based on the extended deterministic technique is used to estimate the seismic hazard load for the investigated area. The study area includes many active fault systems along the Gulf of Suez that cause many moderate and large earthquakes. Overall, the seismic activity of the area has recently become better understood following the use of new waveform inversion methods and software to develop accurate focal mechanism solutions for recently recorded earthquakes around the studied area. These earthquakes resulted in major stress-drops in the Eastern Desert and the Gulf of Suez area. These findings have helped to reshape the understanding of the seismotectonic environment of the Gulf of Suez area, which is a perplexing tectonic domain. Based on the newly collected information and data, this study uses an extended deterministic approach to re-examine the seismic hazard for the Gulf of Suez region, particularly the wind turbine towers at the Zafarana Wind Farm and its vicinity. Alternate seismic source and magnitude-frequency relationships were combined with various indigenous attenuation relationships, adapted within a logic tree formulation, to quantify and project the regional exposure on a set of hazard maps. We select two desired exceedance probabilities (10% and 20%) with which any of the applied scenarios may exceed the largest median ground acceleration. The ground motion was calculated at the 50th and 84th percentile levels.

Keywords: MASW, seismic hazard, wind turbine towers, Zafarana wind farm

Procedia PDF Downloads 404
1973 R Software for Parameter Estimation of Spatio-Temporal Model

Authors: Budi Nurani Ruchjana, Atje Setiawan Abdullah, I. Gede Nyoman Mindra Jaya, Eddy Hermawan

Abstract:

In this paper, we propose an application package to estimate the parameters of a spatiotemporal model based on multivariate time series analysis, using the open-source software R. We build the package mainly to estimate the parameters of the Generalized Space Time Autoregressive (GSTAR) model. GSTAR is a combination of time series and spatial models whose parameters vary by location. We use the method of Ordinary Least Squares (OLS) and the Mean Absolute Percentage Error (MAPE) to fit the model to real spatiotemporal phenomena. As case studies, we use oil production data from a volcanic layer at Jatibarang, Indonesia, and climate data such as rainfall in Indonesia. R is very user-friendly: it makes calculations easier, and data processing is accurate and fast. A limitation is that the R script built for estimating the parameters of the spatiotemporal GSTAR model is still restricted to stationary time series models. Therefore, the R program under Windows can be developed further, both for theoretical studies and for applications.
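
For the simplest member of the family, GSTAR(1;1) with z_t = Φ0·z_{t-1} + Φ1·W z_{t-1} + e_t and diagonal Φ0, Φ1, the OLS step reduces to a per-location regression. A sketch in Python for illustration (the paper's package itself is written in R, and W is assumed row-standardized):

```python
# OLS estimation of a GSTAR(1;1) model with location-varying parameters,
# plus the MAPE fit criterion.
import numpy as np

def gstar11_ols(Z, W):
    """Z: T x N observations over N locations; W: N x N spatial weight matrix."""
    T, N = Z.shape
    phi0, phi1 = np.zeros(N), np.zeros(N)
    S = Z @ W.T                                   # spatially lagged series (W z_t per row)
    for i in range(N):
        X = np.column_stack([Z[:-1, i], S[:-1, i]])
        beta, *_ = np.linalg.lstsq(X, Z[1:, i], rcond=None)
        phi0[i], phi1[i] = beta
    return phi0, phi1

def mape(actual, pred):
    return np.mean(np.abs((actual - pred) / actual)) * 100
```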

Keywords: GSTAR Model, MAPE, OLS method, oil production, R software

Procedia PDF Downloads 243
1972 Hybrid Deep Learning and FAST-BRISK 3D Object Detection Technique for Bin-Picking Application

Authors: Thanakrit Taweesoontorn, Sarucha Yanyong, Poom Konghuayrob

Abstract:

Robotic arms have gained popularity in various industries due to their accuracy and efficiency. This research proposes a method for bin-picking tasks using a cobot, combining the YOLOv5 CNN model for object detection and pose estimation with traditional feature detection (FAST), feature description (BRISK), and matching algorithms. By integrating these algorithms and utilizing a small-scale depth sensor camera for capturing depth and color images, the system achieves real-time object detection and accurate pose estimation, enabling the robotic arm to pick objects correctly in both position and orientation. Furthermore, the proposed method is implemented within the ROS framework to provide a seamless platform for robotic control and integration. This integration of robotics, cameras, and AI technology contributes to the development of industrial robotics, opening up new possibilities for automating challenging tasks and improving overall operational efficiency.
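
The classical half of the pipeline (FAST detection, BRISK description, Hamming matching) has a direct OpenCV form. A sketch under the assumption that the YOLOv5 stage has already cropped the detected object; file names and thresholds are illustrative:

```python
# FAST keypoints + BRISK descriptors + brute-force Hamming matching sketch.
import cv2

img_tmpl = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)    # hypothetical object template
img_scene = cv2.imread("scene_crop.png", cv2.IMREAD_GRAYSCALE)  # hypothetical YOLOv5 crop

fast = cv2.FastFeatureDetector_create(threshold=25)
brisk = cv2.BRISK_create()

kp1 = fast.detect(img_tmpl, None)
kp2 = fast.detect(img_scene, None)
kp1, des1 = brisk.compute(img_tmpl, kp1)
kp2, des2 = brisk.compute(img_scene, kp2)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
print(len(good), "good matches")
```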

Keywords: robotic vision, image processing, applications of robotics, artificial intelligence

Procedia PDF Downloads 97
1971 Estimation of the Temperatures in an Asynchronous Machine Using Extended Kalman Filter

Authors: Yi Huang, Clemens Guehmann

Abstract:

In order to monitor the thermal behavior of an asynchronous machine with a squirrel cage rotor, a 9th-order extended Kalman filter (EKF) algorithm is implemented to estimate the temperatures of the stator windings, the rotor cage and the stator core. The state-space equations of the EKF are established based on the electrical, mechanical and simplified thermal models of an asynchronous machine. The asynchronous machine with the simplified thermal model in Dymola is compiled as a DymolaBlock, a physical model in MATLAB/Simulink. The coolant air temperature, three-phase voltages and currents are exported from the physical model and are processed by the EKF estimator as inputs. Compared to the temperatures exported from the physical model of the machine, the three sets of temperatures can be estimated quite accurately by the EKF estimator. The online EKF estimator is independent of the machine control algorithm and can work under any speed and load condition, provided the stator current system is nonzero.
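
The estimator itself follows the generic EKF recursion. A sketch of one predict/update step, in which f, h and their Jacobians F, H stand for the machine's coupled electrical-mechanical-thermal models (passed in as callables rather than spelled out here):

```python
# Generic extended Kalman filter predict/update step sketch.
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """x, P: state estimate and covariance; u: input; z: measurement."""
    # predict
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # update
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))        # innovation correction
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```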

Keywords: asynchronous machine, extended Kalman filter, resistance, simulation, temperature estimation, thermal model

Procedia PDF Downloads 285
1970 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach

Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam

Abstract:

Distribution models of wind speed data are essential to assess potential wind energy because they decrease the uncertainty in estimating wind energy output. Therefore, before performing a detailed potential energy analysis, the precise distribution model for the wind speed data must be found. In this research, numerous goodness-of-fit criteria, such as the Kolmogorov-Smirnov and Anderson-Darling statistics, Chi-Square, root mean square error (RMSE), AIC and BIC, were finally combined to determine the best-fitted wind speed distribution. The suggested method uses all criteria collectively. This method was used to statistically fit 14 distribution models to wind speed data collected at four sites in Pakistan. The results show that this method provides the best basis for selecting the most suitable wind speed statistical distribution, and the graphical representation is consistent with the analytical results. This research presents three estimation methods that can be used to calibrate the different distributions used to estimate the wind. In the suggested MLM, MOM, and MLE, the third-order moment used in the wind energy formula is a key function because it makes an important contribution to the precise estimate of wind energy. In order to prove the merit of the suggested MOM, it was compared with well-known estimation methods, such as the method of linear moments and the maximum likelihood estimate. In the comparative analysis, based on several goodness-of-fit criteria, the performance of the considered techniques is evaluated on actual wind speeds measured over different time periods. The results obtained show that MOM provides a more precise estimation than other familiar approaches in terms of estimating wind energy based on the fourteen distributions. Therefore, MOM can be used as a better technique for assessing wind energy.
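
For one candidate distribution, the criteria bundle can be computed as below; in the study this is repeated for 14 distributions and the ranks combined. The Weibull parameters and data here are synthetic stand-ins for the measured wind speeds:

```python
# Goodness-of-fit bundle for one candidate distribution (Weibull).
import numpy as np
from scipy import stats

v = stats.weibull_min.rvs(2.0, scale=6.0, size=1000, random_state=0)  # stand-in wind speeds

c, loc, scale = stats.weibull_min.fit(v, floc=0)               # maximum likelihood fit
ks_stat, ks_p = stats.kstest(v, "weibull_min", args=(c, loc, scale))

loglik = np.sum(stats.weibull_min.logpdf(v, c, loc, scale))
k = 2                                                          # free parameters (floc fixed)
aic = 2 * k - 2 * loglik
bic = k * np.log(v.size) - 2 * loglik

v_sorted = np.sort(v)
ecdf = np.arange(1, v.size + 1) / (v.size + 1)
rmse = np.sqrt(np.mean((ecdf - stats.weibull_min.cdf(v_sorted, c, loc, scale)) ** 2))

m3 = np.mean(v ** 3)   # third-order moment: key input to the wind-power density formula
print(ks_stat, aic, bic, rmse, m3)
```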

Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment

Procedia PDF Downloads 85
1969 Estimation of Uncertainty of Thermal Conductivity Measurement with Single Laboratory Validation Approach

Authors: Saowaluck Ukrisdawithid

Abstract:

The thermal conductivity of thermal insulation materials is measured by a Heat Flow Meter (HFM) apparatus. The components of uncertainty are complex and difficult to evaluate in routine measurement by a modelling approach. In this study, the uncertainty of thermal conductivity measurement was estimated by a single laboratory validation approach. The within-laboratory reproducibility was 1.1%. The standard uncertainty of method and laboratory bias, evaluated using the SRM 1453 expanded polystyrene board, was dominant at 1.4%. However, it was assessed that there was no significant bias. For sample measurement, the sources of uncertainty were repeatability, the density of the sample and the thermal conductivity resolution of the HFM. From this approach to sample measurements, the combined uncertainty was calculated. In summary, the thermal conductivity of the sample, polystyrene foam, was reported as 0.03367 W/m·K ± 3.5% (k = 2) at a mean temperature of 23.5 °C. The single laboratory validation approach is a simple key for routine testing laboratories to estimate the uncertainty of thermal conductivity measurement by HFM, in accordance with ISO/IEC 17025:2017 requirements. These results are meaningful for laboratory competence improvement, quality control of products, and conformity assessment.
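
The combined and expanded uncertainty then follow from a root-sum-of-squares of the components. A worked sketch in which only the reproducibility, the bias component and the coverage factor come from the abstract; the remaining component values are illustrative assumptions:

```python
# Combined (u_c) and expanded (U, k = 2) relative uncertainty sketch.
import math

u_reproducibility = 1.1    # within-laboratory reproducibility, % (from abstract)
u_method_bias     = 1.4    # method and laboratory bias via SRM 1453, % (from abstract)
u_repeatability   = 0.4    # sample repeatability, % (illustrative)
u_density         = 0.3    # sample density contribution, % (illustrative)
u_resolution      = 0.2    # HFM thermal-conductivity resolution, % (illustrative)

u_c = math.sqrt(sum(u ** 2 for u in (u_reproducibility, u_method_bias,
                                     u_repeatability, u_density, u_resolution)))
U = 2 * u_c                # expanded uncertainty with coverage factor k = 2
print(f"u_c = {u_c:.2f}%, U (k=2) = {U:.2f}%")
```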

Keywords: single laboratory validation approach, within-laboratory reproducibility, method and laboratory bias, certified reference material

Procedia PDF Downloads 155
1968 Statistical and Analytical Comparison of GIS Overlay Modelings: An Appraisal on Groundwater Prospecting in Precambrian Metamorphics

Authors: Tapas Acharya, Monalisa Mitra

Abstract:

Overlay modeling is the most widely used conventional analysis for spatial decision support systems. Overlay modeling requires a set of themes with different weightages computed in varied manners, which gives a resultant input for further integrated analysis. In spite of being the most popular and widely used technique, it gives inconsistent and erroneous results for similar inputs when processed with different GIS overlay techniques. This study is an attempt to compare and analyse the differences in the outputs of different overlay methods on a GIS platform, using the same set of themes of the Precambrian metamorphics, for groundwater prospecting in Precambrian metamorphic rocks. The objective of the study is to identify the most suitable overlay method for groundwater prospecting in older Precambrian metamorphics. Seven input thematic layers, namely slope, Digital Elevation Model (DEM), soil thickness, lineament intersection density, average groundwater table fluctuation, stream density and lithology, have been used in the spatial overlay models of the fuzzy overlay, weighted overlay and weighted sum overlay methods to yield suitable groundwater prospective zones. Spatial concurrence analysis with high-yielding wells of the study area and statistical comparative studies among the outputs of the various overlay models using RStudio reveal that the weighted overlay model is the most efficient GIS overlay model for delineating groundwater prospective zones in Precambrian metamorphic rocks.
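
The weighted overlay computation itself is a weighted sum of reclassified rasters. A minimal NumPy sketch with illustrative themes, weights and threshold standing in for the seven layers above:

```python
# Weighted overlay sketch: weighted sum of reclassified theme rasters.
import numpy as np

shape = (100, 100)
rng = np.random.default_rng(0)
# themes reclassified onto a common 1-9 suitability scale (illustrative)
slope     = rng.integers(1, 10, shape)
lineament = rng.integers(1, 10, shape)
soil      = rng.integers(1, 10, shape)

themes  = [slope, lineament, soil]
weights = [0.4, 0.35, 0.25]            # must sum to 1

suitability = sum(w * t for w, t in zip(weights, themes))
prospect_zone = suitability >= 6       # threshold into groundwater-prospective zones
print(prospect_zone.mean())            # fraction of cells flagged as prospective
```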

Keywords: fuzzy overlay, GIS overlay model, groundwater prospecting, Precambrian metamorphics, weighted overlay, weighted sum overlay

Procedia PDF Downloads 128
1967 The Impact of BIM Technology on the Whole Process Cost Management of Civil Engineering Projects in Kenya

Authors: Nsimbe Allan

Abstract:

The study examines the impact of Building Information Modeling (BIM) on the cost management of engineering projects, focusing specifically on the Mombasa Port Area Development Project. The objective of this research is to determine the mechanisms through which BIM facilitates stakeholder collaboration, reduces construction-related expenses, and enhances the precision of cost estimation. Furthermore, the study investigates barriers to execution, assesses the impact on the project's transparency, and suggests approaches to maximize resource utilization. The study, selected for its practical significance and intricate nature, began with a Systematic Literature Review (SLR) using credible databases, including ScienceDirect and IEEE Xplore. To constitute a diverse sample, 69 individuals, including project managers, cost estimators, and BIM administrators, were selected via stratified random sampling. The data were obtained using a mixed-methods approach, which prioritized ethical considerations. SPSS and Microsoft Excel were used for the analysis. The research emphasizes the crucial role that project managers, architects, and engineers play in the decision-making process (47% of respondents). Furthermore, a significant improvement in cost estimation accuracy was reported by 70% of the participants. It was found that the implementation of BIM resulted in enhanced project visibility, which in turn optimized resource allocation and facilitated the budgeting process. In brief, the study highlights the positive impacts of BIM on collaborative decision-making and cost estimation, addresses challenges related to implementation, and provides solutions for the efficient assimilation and understanding of BIM principles.

Keywords: cost management, resource utilization, stakeholder collaboration, project transparency

Procedia PDF Downloads 69
1966 Estimation of Gaseous Pollutants at Kalyanpur, Dhaka City

Authors: Farhana Tarannum

Abstract:

Ambient (outdoor) air pollution is now recognized as an important problem, both nationally and worldwide. The concentrations of gaseous pollutants (SOx, NOx, CO and O3) have been determined from samples collected at Kalyanpur along the Shamoli corridor in Dhaka city. Pollutants were determined in samples collected at ground level and on the roof of a 7-storied building. These pollutants are emitted largely from stationary sources like fossil-fuel-fired power plants, industrial plants, and manufacturing facilities, as well as from mobile sources. The incomplete combustion of fuel and wood and the sulphur-containing fuel used in vehicles are among the main causes of CO and SOx, respectively, in our natural environment. When the combustion temperature is high enough, some nitrogen reacts with oxygen in the air, and various nitrogen oxides (NOx) are formed. VOCs react with NOx in the presence of sunlight to form O3. A UV-visible spectrophotometric method has been used for the determination of SOx, NOx and O3, and a sensor-type device was used for the estimation of CO. It was found that the air pollutants (CO, SOx, NOx and O3) in samples collected at the roof of the building were lower compared to ground level, indicating that people at ground level are most affected by the gaseous pollutants.

Keywords: gaseous pollutants, UV-visible spectrophotometry, ambient air quality, Dhaka city

Procedia PDF Downloads 347
1965 Expert Based System Design for Integrated Waste Management

Authors: A. Buruzs, M. F. Hatwágner, A. Torma, L. T. Kóczy

Abstract:

Recently, an increasing number of researchers have been focusing on working out realistic solutions to sustainability problems. As sustainability issues gain higher importance for organisations, the management of such decisions becomes critical. Knowledge representation is a fundamental issue of complex knowledge-based systems. Many types of sustainability problems would benefit from models based on experts’ knowledge. Cognitive maps have been used for analyzing and aiding decision making, and a cognitive map can be made of almost any system or problem. A fuzzy cognitive map (FCM) can successfully represent knowledge and human experience, introducing concepts to represent the essential elements and the cause-and-effect relationships among the concepts to model the behavior of any system. Integrated waste management systems (IWMS) are complex systems that can be decomposed into non-related and related subsystems and elements, where many factors have to be taken into consideration that may be complementary, contradictory, and competitive; these factors influence each other and determine the overall decision process of the system. The goal of the present paper is to construct an efficient IWMS which considers various factors. The authors’ intention is to propose an expert-based system design approach for implementing expert decision support in the area of IWMSs and to introduce an appropriate methodology for the development and analysis of group FCMs. A framework for such a methodology, consisting of the development and application phases, is presented.
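
FCM inference iterates concept activations through the expert-supplied weight matrix until a steady state. A minimal sketch with an illustrative three-concept map, not the experts' actual IWMS map:

```python
# Fuzzy cognitive map inference sketch: A_{t+1} = f(A_t + A_t W).
import numpy as np

def fcm_infer(A0, W, n_iter=50, lam=1.0):
    """A0: initial concept activations in [0, 1]; W[i, j]: causal weight of concept i on j."""
    A = A0.copy()
    for _ in range(n_iter):
        A = 1.0 / (1.0 + np.exp(-lam * (A + A @ W)))   # sigmoid threshold function
    return A

# illustrative 3-concept map (e.g. collection rate, treatment capacity, landfill load)
W = np.array([[0.0, 0.6, -0.3],
              [0.0, 0.0,  0.7],
              [0.2, 0.0,  0.0]])
print(fcm_infer(np.array([0.8, 0.1, 0.5]), W))
```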

Keywords: factors, fuzzy cognitive map, group decision, integrated waste management system

Procedia PDF Downloads 277
1964 New Estimation in Autoregressive Models with Exponential White Noise by Using Reversible Jump MCMC Algorithm

Authors: Suparman Suparman

Abstract:

The white noise in an autoregressive (AR) model is often assumed to be normally distributed. In applications, however, the white noise usually does not follow a normal distribution. This paper aims to estimate the parameters of an AR model that has exponential white noise. A Bayesian method is adopted: a prior distribution for the parameters of the AR model is selected, and this prior distribution is then combined with the likelihood function of the data to obtain a posterior distribution. Based on this posterior distribution, a Bayesian estimator for the parameters of the AR model is derived. Because the order of the AR model is itself considered a parameter, this Bayesian estimator cannot be calculated explicitly. To resolve this problem, the reversible jump Markov Chain Monte Carlo (MCMC) method is adopted. As a result, the order and the parameters of the AR model can be estimated simultaneously.
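
The non-standard ingredient here is the likelihood itself: exponential noise is non-negative, so any residual below zero makes the likelihood vanish. A sketch of the log-likelihood that an RJMCMC sampler would evaluate when proposing a new order or new coefficients:

```python
# Log-likelihood of an AR(p) model with exponential (rate lam) white noise.
import numpy as np

def ar_exp_loglik(z, phi, lam):
    """z: observed series; phi: AR coefficients (length p); lam: exponential rate."""
    p = len(phi)
    resid = z[p:] - sum(phi[k] * z[p - 1 - k : len(z) - 1 - k] for k in range(p))
    if np.any(resid < 0):
        return -np.inf                  # outside the exponential support
    return resid.size * np.log(lam) - lam * np.sum(resid)
```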

Keywords: autoregressive (AR) model, exponential white noise, Bayesian, reversible jump Markov Chain Monte Carlo (MCMC)

Procedia PDF Downloads 356
1963 Obtaining the Analytic Dependence for Estimating the Ore Mill Operation Modes

Authors: Baghdasaryan Marinka

Abstract:

The particular significance of comprehensively estimating the increase in the operation efficiency of the mill motor electromechanical system, which provides the main technological process for obtaining a metallic concentrate, as well as the technical state of the system, is substantiated. The works carried out in the sphere of investigating, creating, and improving the operation modes of electric drive motors and ore-grinding mills have been studied. Analytic dependences for estimating the operation modes of the ore-grinding mills, aimed at improving the efficiency of ore-crushing process maintenance and technical service, have been obtained. The obtained analytic dependences establish a link between the technological and power parameters of the electromechanical system, allow the state of the system to be estimated, and reveal the controlled parameters required for efficient management when the technological parameters change. It has been substantiated that changes in the technological factors affecting the power consumption of the drive motor do not cause instability in the electromechanical system.

Keywords: electromechanical system, estimation, operation mode, productivity, technological process, the mill filling degree

Procedia PDF Downloads 272
1962 Designing Supplier Partnership Success Factors in the Coal Mining Industry

Authors: Ahmad Afif, Teuku Yuri M. Zagloel

Abstract:

Sustainable supply chain management is a pattern that has emerged recently in industry and companies. The procurement process is one of the key factors for efficiency in supply chain management practices, and partnership is one of the procurement strategies for strategic items. The success factors of the partnership must be determined to avoid things that endanger the financial and operational status of the company. Current supplier partnership research focuses on the selection of general criteria and sustainable supplier selection; there is still limited research on the success factors of supplier partnerships that focus on strategic items in the coal mining industry. Meanwhile, procurement in coal mining has its own characteristics, and there are regulations related to the procurement of goods. Therefore, this research was conducted to determine the categories of goods that are included in the strategic items and to design the success factors of supplier partnerships. The main factors studied are general, financial, production, reputation, synergy, and sustainability factors. The research used the Kraljic method to determine the categories of goods that are included in the strategic items, and a hybrid Multi-Criteria Decision-Making (MCDM) method to design the supplier partnership success factors. Integrated Fuzzy AHP-Fuzzy TOPSIS is used to determine the weights of the supplier partnership success factors and to rank suppliers on the factors used.
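
The ranking step at the end has the standard TOPSIS shape. A crisp sketch of the closeness-to-ideal computation; the paper's fuzzy AHP weights and fuzzy TOPSIS variant wrap this same core, and the decision matrix below is illustrative:

```python
# Crisp TOPSIS ranking sketch: closeness coefficient per alternative.
import numpy as np

def topsis(X, w, benefit):
    """X: alternatives x criteria; w: criterion weights; benefit: True where larger is better."""
    R = X / np.linalg.norm(X, axis=0)          # vector-normalize each criterion
    V = R * w
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)             # higher = closer to the ideal supplier

X = np.array([[7.0, 0.4, 8.0],                 # illustrative supplier scores
              [9.0, 0.6, 6.0],
              [8.0, 0.5, 7.0]])
print(topsis(X, np.array([0.5, 0.2, 0.3]), np.array([True, False, True])))
```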

Keywords: supplier, partnership, strategic item, success factors, coal mining industry

Procedia PDF Downloads 131
1961 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias correction steps, which are based on biological considerations, such as GC content, and are applied in single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived based on the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, thus leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
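
The alternating idea can be seen in a toy least-squares version: with Y ≈ Xβ and both factors unknown across samples, fix one factor and solve for the other, then swap. The real XAEM alternates EM updates on read counts rather than plain least squares:

```python
# Toy alternating estimation of a bilinear model Y ~ X B with X and B unknown.
import numpy as np

rng = np.random.default_rng(0)
X_true = rng.uniform(size=(30, 3))            # unknown "design" (read-distribution) matrix
B_true = rng.uniform(size=(3, 10))            # isoform expression across 10 samples
Y = X_true @ B_true + 0.01 * rng.standard_normal((30, 10))

X = rng.uniform(size=(30, 3))                 # random initialization
for _ in range(200):
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)         # update B given X
    Xt, *_ = np.linalg.lstsq(B.T, Y.T, rcond=None)    # update X given B
    X = Xt.T
print(np.linalg.norm(Y - X @ B) / np.linalg.norm(Y))  # relative residual shrinks
```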

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 143
1960 Optimizing and Evaluating Performance Quality Control of the Production Process of Disposable Essentials Using a Vague Goal Programming Approach

Authors: Hadi Gholizadeh, Ali Tajdin

Abstract:

To have effective production planning, it is necessary to control the quality of processes. This paper aims at improving the performance of the disposable essentials process using statistical quality control and goal programming in a vague environment, which expresses the uncertainty that arises because there is always measurement error in the real world. Therefore, in this study, the conditions are examined in a vague, distance-based environment. The disposable essentials process at the Kach Company was studied. Statistical control tools were used to characterize the existing process for four factor responses: the average of the disposable glasses’ weights, heights, crater diameters, and volumes. Goal programming was then utilized to find the combination of optimal factor settings in a vague environment, which accounts for the uncertainty of the initial information when some of the parameters of the models are vague; in addition, a fuzzy regression model is used to predict the responses of the four described factors. Optimization results show that the process capability index values for the disposable glasses’ average weights, heights, crater diameters and volumes were improved. This increases the quality of the products and reduces waste, which will reduce the cost of the finished product and ultimately bring customer satisfaction; this satisfaction will mean increased sales.
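
Goal programming augments each response target with deviation variables d⁻, d⁺ and minimizes their total. A minimal crisp sketch with scipy; the coefficients and targets are illustrative, not the study's fitted fuzzy-regression models:

```python
# Goal programming sketch: minimize total deviation from response targets.
import numpy as np
from scipy.optimize import linprog

# two responses modeled linearly in two factors: A x ~= targets (illustrative)
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
targets = np.array([10.0, 12.0])

# decision vector: [x1, x2, d1-, d1+, d2-, d2+]
c = np.array([0, 0, 1, 1, 1, 1])              # minimize the sum of deviations
A_eq = np.hstack([A, np.array([[1, -1, 0, 0],
                               [0, 0, 1, -1]])])   # A x + d- - d+ = target
res = linprog(c, A_eq=A_eq, b_eq=targets, bounds=[(0, None)] * 6)
print(res.x[:2], res.fun)                     # factor settings and total deviation
```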

Keywords: goal programming, quality control, vague environment, disposable glasses’ optimization, fuzzy regression

Procedia PDF Downloads 225
1959 Validation of the Linear Trend Estimation Technique for Prediction of Average Water and Sewerage Charge Rate Prices in the Czech Republic

Authors: Aneta Oblouková, Eva Vítková

Abstract:

The article deals with the issue of water and sewerage charge rate prices in the Czech Republic. The research is specifically focused on the analysis of the development of the average prices of the water and sewerage charge rate in the Czech Republic in the years 1994-2021 and on the validation of the chosen methodology for predicting the development of these average prices. The research is based on data collection; the data for this research were obtained from the Czech Statistical Office. The aim of the paper is to validate the relevance of the mathematical linear trend estimation technique for calculating predicted average prices of water and sewerage charge rates. The real values of the average prices of water and sewerage charge rates in the Czech Republic in the years 1994-2018 were obtained from the Czech Statistical Office and were converted into a mathematical equation. The same type of real data was obtained from the Czech Statistical Office for the years 2019-2021. Predictions of the average prices of water and sewerage charge rates in the Czech Republic in the years 2019-2021 were calculated using the chosen method, the linear trend estimation technique. The values obtained from the Czech Statistical Office and the values calculated using the chosen methodology were subsequently compared. The research result is a validation that the chosen mathematical technique is suitable for this research.
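
The validation procedure reduces to fitting a first-degree polynomial on the 1994-2018 window and extrapolating to 2019-2021. A sketch with placeholder prices (the actual CZSO values are not reproduced here):

```python
# Linear trend estimation and out-of-window prediction sketch.
import numpy as np

years = np.arange(1994, 2019)                         # estimation window 1994-2018
rng = np.random.default_rng(0)
prices = 20.0 + 2.1 * (years - 1994) + rng.normal(0, 1.5, years.size)  # placeholder data

slope, intercept = np.polyfit(years, prices, 1)       # least-squares linear trend
future = np.arange(2019, 2022)
predicted = slope * future + intercept

actual = np.array([73.0, 75.5, 78.1])                 # placeholder 2019-2021 actuals
print(predicted, np.abs(predicted - actual) / actual * 100)  # % deviation per year
```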

Keywords: Czech Republic, linear trend estimation, price prediction, water and sewerage charge rate

Procedia PDF Downloads 120