Search results for: improvement of model accuracy and reliability
21776 Application of the Concept of Comonotonicity in Option Pricing
Authors: A. Chateauneuf, M. Mostoufi, D. Vyncke
Abstract:
Monte Carlo (MC) simulation is a technique that provides approximate solutions to a broad range of mathematical problems. A drawback of the method is its high computational cost, especially in a high-dimensional setting, such as estimating the Tail Value-at-Risk for large portfolios or pricing basket options and Asian options. For these types of problems, one can construct an upper bound in the convex order by replacing the copula by the comonotonic copula. This comonotonic upper bound can be computed very quickly, but it gives only a rough approximation. In this paper we introduce the Comonotonic Monte Carlo (CoMC) simulation, by using the comonotonic approximation as a control variate. The CoMC is of broad applicability and numerical results show a remarkable speed improvement. We illustrate the method for estimating Tail Value-at-Risk and pricing basket options and Asian options when the logreturns follow a Black-Scholes model or a variance gamma model.
Keywords: control variate Monte Carlo, comonotonicity, option pricing, scientific computing
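To make the control-variate mechanism concrete, the sketch below prices an arithmetic-average Asian call by crude Monte Carlo and then reduces the variance with a control variate whose expectation is known in closed form. For simplicity the control here is a European call on the terminal price (priced by the Black-Scholes formula) rather than the comonotonic upper bound used in CoMC, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call (the known mean of the control)."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def asian_call_cv(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                  n_steps=50, n_paths=20000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)
    disc = np.exp(-r * T)
    # Target: discounted arithmetic-average Asian call payoff
    X = disc * np.maximum(S.mean(axis=1) - K, 0.0)
    # Control: discounted European call on the terminal price, expectation known in closed form
    Y = disc * np.maximum(S[:, -1] - K, 0.0)
    EY = bs_call(S0, K, r, sigma, T)
    C = np.cov(X, Y)
    beta = C[0, 1] / C[1, 1]              # optimal control-variate coefficient
    X_cv = X - beta * (Y - EY)            # variance-reduced estimator
    return X.mean(), X_cv.mean(), X.std(ddof=1), X_cv.std(ddof=1)

crude, cv, sd_crude, sd_cv = asian_call_cv()
print(f"crude MC: {crude:.4f} (sd {sd_crude:.3f}) | control variate: {cv:.4f} (sd {sd_cv:.3f})")
```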
Procedia PDF Downloads 515
21775 Numerical Modelling of Prestressed Geogrid Reinforced Soil System
Authors: Soukat Kumar Das
Abstract:
Rapid industrialization and population growth have resulted in a scarcity of suitable ground conditions. This has driven the need for ground improvement by means of geosynthetic reinforcement, with the minimum possible settlement and the maximum possible safety. Prestressing the geosynthetics offers an economical yet safe method of achieving this goal. The commercially available software PLAXIS 3D has made the analysis of prestressed geosynthetics simpler, with more practical simulations of the ground. In this study, the effects of prestressing the geogrid and of footing interference on Unreinforced (UR), Geogrid Reinforced (GR) and Prestressed Geogrid Reinforced (PGR) soil are analysed numerically in PLAXIS 3D, in terms of load bearing capacity and settlement characteristics. The results of the numerical analysis have been validated and compared with those given in the reference paper and are in very good agreement with the actual field values, with very small variation. The GR soil improves the bearing pressure by 240 %, whereas the PGR soil improves it by almost 500 % for 1 mm settlement. In fact, the PGR soil enhances the bearing pressure of the GR soil by almost 200 %. The settlement reduction is also very significant: at 100 kPa bearing pressure, the settlement of the PGR soil is reduced by about 88 % with respect to the UR soil and by up to 67 % with respect to the GR soil. The prestressing force enhances the reinforcement mechanism, resulting in increased bearing pressure. The deformation at the geogrid layer is 13.62 mm for the GR soil, whereas it decreases to a mere 3.5 mm for the PGR soil, which clearly confirms the effect of prestressing on the geogrid layer. The improvement factor, conventionally known as the Bearing Capacity Ratio, which depicts the improvement of the PGR and GR soils with respect to the UR soil at different settlements, varies in the range of 1.66-2.40 for the GR soil in the present analysis and between 3.58 and 5.12 for the PGR soil with respect to the UR soil. The effect of prestressing was also observed in the case of two interfering square footings. The centre-to-centre distance between the two footings (SFD) was taken as B, 1.5B, 2B, 2.5B and 3B, where B is the width of the footing. It was found that for the UR soil the improvement of the bearing pressure extended up to 1.5B, after which it remained almost the same. For the GR soil, however, the zone of influence extended up to 2B, and for the PGR soil it extended further to 2.5B. Thus, the zone of interference for the PGR soil increased by 67 % with respect to the UR soil and by almost 25 % with respect to the GR soil.
Keywords: bearing, geogrid, prestressed, reinforced
Procedia PDF Downloads 402
21774 Measurement and Prediction of Speed of Sound in Petroleum Fluids
Authors: S. Ghafoori, A. Al-Harbi, B. Al-Ajmi, A. Al-Shaalan, A. Al-Ajmi, M. Ali Juma
Abstract:
Seismic methods play an important role in the exploration for hydrocarbon reservoirs. However, the success of the method depends strongly on the reliability of the measured or predicted information regarding the velocity of sound in the media. The speed of sound has been used to study the thermodynamic properties of fluids. In this study, experimental data on the speed of sound in a binary mixture of toluene and octane are reported and analyzed. A three-factor, three-level Box-Behnken design is used to determine the significance of each factor, the synergetic effects of the factors, and the most significant factors affecting the speed of sound. The developed mathematical model and statistical analysis provided a critical analysis of the simultaneous interactive effects of the independent variables, indicating that the developed quadratic models were highly accurate and predictive.
Keywords: experimental design, octane, speed of sound, toluene
Procedia PDF Downloads 276
21773 Comparison of Extended Kalman Filter and Unscented Kalman Filter for Autonomous Orbit Determination of Lagrangian Navigation Constellation
Authors: Youtao Gao, Bingyu Jin, Tanran Zhao, Bo Xu
Abstract:
The history of satellite navigation can be dated back to the 1960s. From the U.S. Transit system and the Russian Tsikada system to the modern Global Positioning System (GPS) and the Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS), the performance of satellite navigation has been greatly improved. Nowadays, the navigation accuracy and coverage of these existing systems already fully meet the requirements of near-Earth users, but deep-space targets are still beyond their reach. Due to the renewed interest in space exploration, a novel high-precision satellite navigation system is becoming even more important. The increasing demand for such a deep-space navigation system has contributed to the emergence of a variety of new constellation architectures, such as the Lunar Global Positioning System. Apart from a Walker constellation similar to the one adopted by GPS on Earth, a novel constellation architecture consisting of libration point satellites in the Earth-Moon system is also available to construct the lunar navigation system, which can accordingly be called the libration point satellite navigation system. The concept of using Earth-Moon libration point satellites for lunar navigation was first proposed by Farquhar and then followed by many other researchers. Moreover, due to the special characteristics of libration point orbits, an autonomous orbit determination technique called ‘Liaison navigation’ can be adopted by the libration point satellites. Using only scalar satellite-to-satellite tracking data, the orbits of both the user and the libration point satellites can be determined autonomously. In this way, extensive Earth-based tracking measurements can be eliminated, and an autonomous satellite navigation system can be developed for future space exploration missions. The state estimation method is a non-negligible factor that impacts the orbit determination accuracy, besides the type of orbit, the initial state accuracy and the measurement accuracy. We apply the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) to determine the orbits of Lagrangian navigation satellites, and the autonomous orbit determination errors are compared. The simulation results illustrate that the UKF can improve the accuracy and z-axis convergence to some extent.
Keywords: extended Kalman filter, autonomous orbit determination, unscented Kalman filter, navigation constellation
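The sketch below illustrates the EKF/UKF comparison on a standard scalar nonlinear benchmark rather than the Lagrangian orbit dynamics of the paper; the process, measurement and noise models are assumptions chosen only to show how the two filters differ (Jacobians for the EKF, sigma points for the UKF).

```python
import numpy as np

# Univariate nonlinear growth model, a common benchmark for comparing EKF and UKF.
f = lambda x, k: 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)   # process model
h = lambda x: x**2 / 20.0                                               # measurement model
F = lambda x: 0.5 + 25 * (1 - x**2) / (1 + x**2) ** 2                   # df/dx for the EKF
H = lambda x: x / 10.0                                                  # dh/dx for the EKF
Q, R = 1.0, 1.0

def ekf_step(x, P, y, k):
    # Predict with the linearised model, then correct with the linearised measurement.
    x_pred = f(x, k)
    P_pred = F(x) ** 2 * P + Q
    S = H(x_pred) ** 2 * P_pred + R
    K = P_pred * H(x_pred) / S
    return x_pred + K * (y - h(x_pred)), (1 - K * H(x_pred)) * P_pred

def ukf_step(x, P, y, k, alpha=1.0, beta=2.0, kappa=0.0):
    n = 1
    lam = alpha**2 * (n + kappa) - n
    c = np.sqrt(n + lam)
    Wm = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
    Wc = Wm.copy(); Wc[0] += 1 - alpha**2 + beta
    # Unscented prediction: propagate sigma points through the nonlinear process model.
    sig = np.array([x, x + c * np.sqrt(P), x - c * np.sqrt(P)])
    sig_f = f(sig, k)
    x_pred = Wm @ sig_f
    P_pred = Wc @ (sig_f - x_pred) ** 2 + Q
    # Unscented measurement update: sigma points redrawn around the prediction.
    sig_p = np.array([x_pred, x_pred + c * np.sqrt(P_pred), x_pred - c * np.sqrt(P_pred)])
    sig_h = h(sig_p)
    y_pred = Wm @ sig_h
    Pyy = Wc @ (sig_h - y_pred) ** 2 + R
    Pxy = Wc @ ((sig_p - x_pred) * (sig_h - y_pred))
    K = Pxy / Pyy
    return x_pred + K * (y - y_pred), P_pred - K**2 * Pyy

rng = np.random.default_rng(0)
x_true, x_e, P_e, x_u, P_u = 0.1, 0.0, 1.0, 0.0, 1.0
err_e = err_u = 0.0
for k in range(1, 101):
    x_true = f(x_true, k) + rng.normal(scale=np.sqrt(Q))
    y = h(x_true) + rng.normal(scale=np.sqrt(R))
    x_e, P_e = ekf_step(x_e, P_e, y, k)
    x_u, P_u = ukf_step(x_u, P_u, y, k)
    err_e += (x_e - x_true) ** 2
    err_u += (x_u - x_true) ** 2
print(f"EKF RMSE: {np.sqrt(err_e / 100):.2f}   UKF RMSE: {np.sqrt(err_u / 100):.2f}")
```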
Procedia PDF Downloads 285
21772 Chinese Acupuncture: A Potential Treatment for Autism Rat Model via Improving Synaptic Function
Authors: Sijie Chen, Xiaofang Chen, Juan Wang, Yingying Zhang, Yu Hong, Wanyu Zhuang, Xinxin Huang, Ping Ou, Longsheng Huang
Abstract:
Purpose: Autistic symptom improvement can be observed in children treated with acupuncture, but the mechanism is still being explored. In the present study, we used scalp acupuncture to treat an autism rat model, and the improvement in abnormal behaviors and the specific mechanisms behind it were revealed by assessing animal behaviors, analyzing the RNA sequencing of the prefrontal cortex (PFC), and observing the ultrastructure of PFC neurons under the transmission electron microscope. Methods: On gestational day 12.5, Wistar rats were given valproic acid (VPA) by intraperitoneal injection, and their offspring were considered to be reliable rat models of autism. They were randomized to the VPA or VPA_acupuncture group (n=8). Offspring of pregnant Wistar rats that were simultaneously injected with saline were randomly selected as the wild-type group (WT). VPA_acupuncture group rats received an acupuncture intervention at 23 days of age for 4 weeks, and the other two groups were followed without intervention. After the intervention, all experimental rats underwent behavioral tests. Immediately afterward, they were euthanized by cervical dislocation, and their prefrontal cortex was isolated for RNA sequencing and transmission electron microscopy. Results: The main results are as follows: 1. Animal behavioural tests: VPA group rats showed more anxiety-like behaviour and repetitive, stereotyped behaviour than WT group rats, while showing less spatial exploration ability, activity, social interaction, and social novelty preference. It was gratifying to observe that acupuncture indeed improved these abnormal behaviors of the autism rat model. 2. RNA sequencing: The three groups of rats differed in the expression and enrichment pathways of multiple genes related to synaptic function, neural signal transduction, and circadian rhythm regulation. Our experiments indicated that acupuncture can alleviate the major symptoms of ASD by improving these neurological abnormalities. 3. Under transmission electron microscopy, several lysosomal and mitochondrial structural abnormalities were observed in the prefrontal neurons of VPA group rats, manifested as atrophy of the mitochondrial membrane, blurring or disappearance of the mitochondrial cristae, and even vacuolization. Moreover, the number of synapses and synaptic vesicles was relatively small. Conversely, the mitochondrial structure of rats in the WT and VPA_acupuncture groups was normal, and the number of synapses and synaptic vesicles was relatively large. Conclusion: Acupuncture effectively improved the abnormal behaviors of the autism rat model and the ultrastructure of the PFC neurons, which might work by improving abnormal synaptic function and synaptic plasticity and by promoting neuronal signal transduction.
Keywords: autism spectrum disorder, acupuncture, animal behavior, RNA sequencing, transmission electron microscope
Procedia PDF Downloads 45
21771 Proposal Evaluation of Critical Success Factors (CSF) in Lean Manufacturing Projects
Authors: Guilherme Gorgulho, Carlos Roberto Camello Lima
Abstract:
Critical success factors (CSF) are used to design the practice of project management that can lead directly or indirectly to the success of the project. This management includes many elements that have to be synchronized in order to ensure the project's on-time delivery, quality, and the lowest possible cost. The objective of this work is to develop a proposal for the evaluation of CSF in lean manufacturing projects and to apply the evaluation in a pilot project. The results show that the use of continuous improvement programs in organizations brings benefits such as process cost reduction and improved productivity.
Keywords: continuous improvement, critical success factors (CSF), lean thinking, project management
Procedia PDF Downloads 364
21770 Ranking Effective Factors on Strategic Planning to Achieve Organization Objectives in Fuzzy Multivariate Decision-Making Technique
Authors: Elahe Memari, Ahmad Aslizadeh, Ahmad Memari
Abstract:
Today, strategic planning is counted among the most important duties of senior directors in any organization. Strategic planning allows organizations to implement the compiled strategies and achieve greater competitive benefits than their competitors. The present research work prepares and ranks strategies from the factors affecting strategic planning in the State Road Management and Transportation Organization, in order to show organization managers the role of organizational factors in the efficiency of the process. The connections between six main factors in the organization were studied here, including improvement of strategic thinking in senior managers, improvement of the organization's business processes, rationalization of resource allocation in different parts of the organization, coordination and conformity of the strategic plan with organization needs, adjustment of organization activities to environmental changes, and reinforcement of organizational culture. All of these factors were confirmed by the implemented tests and then ranked using a fuzzy multivariate decision-making technique.
Keywords: fuzzy TOPSIS, improvement of organization business process, multivariate decision-making, strategic planning
Procedia PDF Downloads 423
21769 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. Customer value is an indicator, based on the ID-POS database, for detecting customer loyalty to a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to get detailed ID-POS databases of competitor supermarkets. This study first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, multicollinearity between customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that includes all three methods is the best one for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: customer value, Huff's gravity model, POS, retailer
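A minimal sketch of Huff's gravity model as used in the distance/attractiveness reasoning above: the patronage probability of each store is its attractiveness divided by distance raised to a decay exponent, normalised over all candidate stores. The store attractiveness values, distances and exponent below are hypothetical.

```python
import numpy as np

def huff_probabilities(attractiveness, distances, lam=2.0):
    """Huff's gravity model: P(customer i patronises store j) is proportional to
    attractiveness_j / distance_ij**lam, normalised over all candidate stores."""
    utility = attractiveness / distances ** lam
    return utility / utility.sum(axis=-1, keepdims=True)

# Hypothetical example: 3 customers x 4 supermarkets (floor areas in m^2, distances in km).
attractiveness = np.array([2500.0, 1200.0, 1800.0, 900.0])
distances = np.array([[1.2, 0.8, 2.5, 3.0],
                      [0.5, 2.0, 1.5, 2.2],
                      [2.8, 1.1, 0.9, 0.7]])
P = huff_probabilities(attractiveness, distances)
print(np.round(P, 3))   # each row sums to 1: the patronage probabilities per customer
```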
Procedia PDF Downloads 123
21768 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test
Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi
Abstract:
Supported by neuroscientific findings, the so-called Hub-and-Spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely the semantic control process and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure based on the preservation of semantic representation, which is appropriate to the specificities of the Hungarian language and which can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined from the latest census data to ensure the representativeness of the sample. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we have elaborated a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose, from the two lower options, the one that best fits the target word above, based on the semantic relation defined. We measured five types of semantic knowledge representations: associative relations, taxonomy, motional representations, and concrete as well as abstract verbs. As the first step in our data analysis, we examined whether our results were normally distributed, and since they were not (p < 0.05), we used nonparametric statistics for the remainder of the analysis. Using descriptive statistics, we could determine the frequency of correct and incorrect responses, and with this knowledge, we could later adjust and remove the items of questionable reliability. The reliability was tested using Cronbach’s α, and all the results were in an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann-Whitney U test; however, we found no difference between the two groups (p > 0.05). Likewise, age had no effect on the results in a one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were examined with a nonparametric Spearman’s rho correlation matrix, showing statistically significant correlations between the subtests (p < 0.05) and signifying a relationship between the measured semantic functions. A margin of error of 5% was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in the differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the tests can be easily conducted in the subclinical phase.
Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge
Procedia PDF Downloads 103
21767 Efficient Deep Neural Networks for Real-Time Strawberry Freshness Monitoring: A Transfer Learning Approach
Authors: Mst. Tuhin Akter, Sharun Akter Khushbu, S. M. Shaqib
Abstract:
A real-time system architecture is highly effective for monitoring and detecting various damaged products or fruits that may deteriorate over time or become infected with diseases. Deep learning models have proven to be effective in building such architectures. However, building a deep learning model from scratch is a time-consuming and costly process. A more efficient solution is to utilize deep neural network (DNN) based transfer learning models in the real-time monitoring architecture. This study focuses on using a novel strawberry dataset to develop effective transfer learning models for the proposed real-time monitoring system architecture, specifically for evaluating and detecting strawberry freshness. Several state-of-the-art transfer learning models were employed, and the best performing model was found to be Xception, demonstrating higher performance across evaluation metrics such as accuracy, recall, precision, and F1-score.
Keywords: strawberry freshness evaluation, deep neural network, transfer learning, image augmentation
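A minimal transfer-learning sketch in the spirit of the study: a pretrained Xception backbone is frozen and a small classification head is trained on top. The two-class setup, layer sizes and hyperparameters are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 2                                  # assumed: fresh vs. not fresh
base = tf.keras.applications.Xception(weights="imagenet", include_top=False,
                                      input_shape=(299, 299, 3))
base.trainable = False                           # freeze the pretrained feature extractor

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # train_ds/val_ds: your tf.data pipelines
```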
Procedia PDF Downloads 90
21766 New Fourth Order Explicit Group Method in the Solution of the Helmholtz Equation
Authors: Norhashidah Hj Mohd Ali, Teng Wai Ping
Abstract:
In this paper, the formulation of a new group explicit method with fourth-order accuracy is described for solving the two-dimensional Helmholtz equation. The formulation is based on the nine-point fourth-order compact finite difference approximation formula. The complexity analysis of the developed scheme is also presented. Several numerical experiments were conducted to test the feasibility of the developed scheme. Comparisons with other existing schemes will be reported and discussed. Preliminary results indicate that this method is a viable high-accuracy alternative solver for the Helmholtz equation.
Keywords: explicit group method, finite difference, Helmholtz equation, five-point formula, nine-point formula
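For orientation, the sketch below solves the same two-dimensional Helmholtz problem with the standard second-order five-point finite-difference scheme (the baseline that higher-order compact and explicit group methods improve on), verified against a manufactured solution; it is not the nine-point fourth-order scheme of the paper.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def helmholtz_5pt(n, k, f_func):
    """Second-order five-point finite-difference solution of u_xx + u_yy + k^2 u = f
    on the unit square with homogeneous Dirichlet boundary conditions."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)                              # interior grid points
    X, Y = np.meshgrid(x, x, indexing="ij")
    T = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2    # 1D second-difference operator
    I = identity(n)
    A = kron(T, I) + kron(I, T) + k**2 * identity(n * n)      # discrete Helmholtz operator
    f = f_func(X, Y).ravel()
    return X, Y, spsolve(A.tocsc(), f).reshape(n, n)

# Manufactured solution u = sin(pi x) sin(pi y), so that f = (k^2 - 2 pi^2) u.
k = 5.0
u_exact = lambda X, Y: np.sin(np.pi * X) * np.sin(np.pi * Y)
f_func = lambda X, Y: (k**2 - 2 * np.pi**2) * u_exact(X, Y)
X, Y, u = helmholtz_5pt(80, k, f_func)
print("max error:", np.abs(u - u_exact(X, Y)).max())          # shrinks as O(h^2) with grid refinement
```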
Procedia PDF Downloads 501
21765 Detection of Internal Mold Infection of Intact Tomatoes by Non-Destructive, Transmittance VIS-NIR Spectroscopy
Authors: K. Petcharaporn, N. Prathengjit
Abstract:
The external characteristics of tomatoes, such as freshness, color, and size, are typically used in quality control processes for tomato sorting. However, internal mold infection of intact tomatoes cannot be sorted based on visual assessment or destructive methods alone. In this study, a non-destructive technique was used to predict the internal mold infection of intact tomatoes using transmittance visible and near-infrared (VIS-NIR) spectroscopy. Spectra of 200 samples, comprising 100 normal tomatoes and 100 mold-infected tomatoes, were acquired in the wavelength range of 665-955 nm. These data were used in conjunction with the partial least squares-discriminant analysis (PLS-DA) method to generate a classification model separating normal from internally mold-infected tomato samples. For this task, the data were split into two groups: 140 samples for a training set and 60 samples for a test set. The spectra of normal and internally mold-infected tomatoes showed different features in the visible wavelength range. Combined spectral pretreatments of standard normal variate transformation (SNV) and smoothing (Savitzky-Golay) gave the optimal calibration model on the training set, with 85.0% accuracy (63 out of 71 normal samples and 56 out of 69 internal mold samples). The classification accuracy of the best model on the test set was 91.7% (29 out of 29 for the normal samples and 26 out of 31 for the internal mold tomato samples). The results from this experiment showed that transmittance VIS-NIR spectroscopy can be used as a non-destructive technique to predict the internal mold infection of intact tomatoes.
Keywords: tomato, mold, quality, prediction, transmittance
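The sketch below reproduces the pretreatment-plus-classifier pipeline described above (SNV, Savitzky-Golay smoothing, then PLS-DA via a PLS regression on 0/1 class labels). The spectra are randomly generated stand-ins, so the printed accuracy is only illustrative of the workflow, not of the reported results.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

# Stand-in data: 200 spectra x 300 wavelengths, binary labels (0 = normal, 1 = mold).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300)).cumsum(axis=1)        # smooth, spectrum-like curves
y = rng.integers(0, 2, size=200)                      # random labels: accuracy will be near chance

X = savgol_filter(snv(X), window_length=11, polyorder=2, axis=1)   # SNV + Savitzky-Golay smoothing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=140, stratify=y, random_state=0)

pls = PLSRegression(n_components=8).fit(X_tr, y_tr)   # PLS-DA: regress the 0/1 class label
y_pred = (pls.predict(X_te).ravel() > 0.5).astype(int)
print("test accuracy:", (y_pred == y_te).mean())
```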
Procedia PDF Downloads 519
21764 Model Based Improvement of Ultrasound Assisted Transport of Cohesive Dry Powders
Authors: Paul Dunst, Ing. Tobias Hemsel, Ing. Habil. Walter Sextro
Abstract:
The use of fine powders with high cohesive and adhesive properties leads to challenges during transport, mixing and dosing in industrial processes, which have not been satisfactorily solved so far. Due to the increased contact forces at the transporting parts (e.g. pipe wall and transport screws), conventional transport systems and also vibratory conveyors reach their limits. Often, flowability-increasing additives that need to be removed again in later process steps are the only option for achieving the desired transport results. A rather new ultrasound-assisted powder transport system has been shown to overcome some of these issues by manipulating the effective friction between the powder and the transport pipe. Within this contribution, the transport mechanism is introduced briefly, together with preliminary transport results. As the tangential force between the transport pipe and the powder is the main influencing factor in the transport process, a test stand was built for measuring the tangential forces of a powder-wall contact in the presence of an ultrasonic vibration orthogonal to the contact plane. Measurements for a sample powder show that the effective tangential force can already be significantly reduced at very low ultrasonic amplitudes. As a result of the measurements, an empirical model for the relationship between tangential force, contact parameters and ultrasonic excitation is presented. This model was used to adjust the driving parameters of the powder transport system, resulting in better performance.
Keywords: powder transport, ultrasound, friction, friction manipulation, vibratory conveyor
Procedia PDF Downloads 152
21763 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress
Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin
Abstract:
Stress causes deleterious effects to the physical, psychological and organizational levels, which highlight the need to use effective coping strategies to deal with it. Several coping models exist, but they don’t integrate the different strategies in a coherent way nor do they take into account the new research on the emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from the review of the scientific literature on coping and from a qualitative study carried out among workers with low or high levels of stress, as well as from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes Specific Strategies in controllable situations (the Modification of the Situation and the Resignation-Disempowerment), Specific Strategies in non-controllable situations (Acceptance and Stubborn Relentlessness) as well as so-called General Strategies (Wellbeing and Avoidance). This study is intended to undertake and present the process of development and validation of an instrument to measure coping strategies based on this model. An initial pool of items has been generated from the conceptual definitions and three expert judges have validated the content. Of these, 18 items have been selected for a short form questionnaire. A sample of 300 students and employees from a Quebec university was used for the validation of the questionnaire. Concerning the reliability of the instrument, the indices observed following the inter-rater agreement (Krippendorff’s alpha) and the calculation of the coefficients for internal consistency (Cronbach's alpha) are satisfactory. To evaluate the construct validity, a confirmatory factor analysis using MPlus supports the existence of a model with six factors. The results of this analysis suggest also that this configuration is superior to other alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrates the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies to cope with stress and thus prevent mental health issues.Keywords: acceptance, coping strategies, stress, validation process
Procedia PDF Downloads 339
21762 A New Mathematical Model of Human Olfaction
Authors: H. Namazi, H. T. N. Kuan
Abstract:
It is known that in humans, the adaptation to a given odor occurs within quite a short span of time (typically one minute) after the odor is presented to the brain. Different models of human olfaction have been developed by scientists, but none of these models considers the diffusion phenomenon in olfaction. A novel microscopic model of human olfaction is presented in this paper. We develop this model by incorporating the transient diffusivity. In fact, the mathematical model is based on the diffusion of the odorant within the mucus layer. Using the model developed in this paper, it becomes possible to quantify the objective strength of an odor.
Keywords: diffusion, microscopic model, mucus layer, olfaction
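As a minimal illustration of odorant diffusion within the mucus layer, the sketch below integrates a 1D Fickian diffusion equation with an explicit finite-difference scheme; the layer thickness, diffusivity and boundary conditions are assumed values, not those of the paper's microscopic model.

```python
import numpy as np

# 1D Fickian diffusion of odorant concentration C(x, t) across a mucus layer,
# dC/dt = D * d2C/dx2, solved with an explicit (FTCS) scheme. Thickness and diffusivity
# below are illustrative assumptions.
L = 30e-6          # mucus layer thickness [m] (assumed)
D = 1e-10          # odorant diffusivity in mucus [m^2/s] (assumed)
nx, dt, t_end = 60, 1e-4, 0.5
dx = L / (nx - 1)
assert D * dt / dx**2 < 0.5              # FTCS stability condition

C = np.zeros(nx)
C[0] = 1.0                                # normalised concentration at the air-mucus interface
for _ in range(int(t_end / dt)):
    C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    C[0], C[-1] = 1.0, 0.0                # fixed source at the surface, sink at the receptor side
print("concentration near the receptor side of the layer:", round(float(C[-2]), 4))
```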
Procedia PDF Downloads 505
21761 An Output Oriented Super-Efficiency Model for Considering Time Lag Effect
Authors: Yanshuang Zhang, Byungho Jeong
Abstract:
There exists some time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in calculating the efficiency of decision making units (DMUs). Recently, a couple of DEA models were developed to consider the time lag effect in the efficiency evaluation of research activities. However, these models cannot discriminate among efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to ‘1’. This problem can be resolved by a super-efficiency model. However, a super-efficiency model sometimes causes an infeasibility problem. This paper suggests an output-oriented super-efficiency model for efficiency evaluation under consideration of the time lag effect. A case example using a long-term research project is given to compare the suggested model with the MpO model.
Keywords: DEA, super-efficiency, time lag, research activities
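A generic sketch of an output-oriented CCR super-efficiency evaluation (without the time-lag structure proposed in the paper): the DMU under evaluation is removed from the reference set, so efficient units obtain scores below 1 and can be discriminated. The input/output data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def output_super_efficiency(X, Y, o):
    """Output-oriented CCR super-efficiency score of DMU o: the DMU under evaluation is
    excluded from the reference set. phi* < 1 marks a super-efficient unit; phi* > 1 means
    the outputs could still be expanded by the factor phi*."""
    n, m = X.shape                  # n DMUs, m inputs
    s = Y.shape[1]                  # s outputs
    others = [j for j in range(n) if j != o]
    c = np.r_[-1.0, np.zeros(len(others))]                     # maximise phi
    A_in = np.hstack([np.zeros((m, 1)), X[others].T])          # sum_j lam_j x_ij <= x_io
    A_out = np.hstack([Y[o][:, None], -Y[others].T])           # phi*y_ro - sum_j lam_j y_rj <= 0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[X[o], np.zeros(s)], bounds=(0, None), method="highs")
    return res.x[0]

# Hypothetical data: 5 DMUs, 2 inputs, 2 outputs.
X = np.array([[20., 150.], [30., 200.], [40., 100.], [20., 120.], [10., 400.]])
Y = np.array([[100., 90.], [80., 120.], [120., 60.], [90., 80.], [60., 100.]])
for o in range(len(X)):
    print(f"DMU {o}: phi* = {output_super_efficiency(X, Y, o):.3f}")
```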
Procedia PDF Downloads 659
21760 Optimization of Alkali Silicate Glass Heat Treatment for the Improvement of Thermal Expansion and Flexural Strength
Authors: Stephanie Guerra-Arias, Stephani Nevarez, Calvin Stewart, Rachel Grodsky, Denis Eichorst
Abstract:
The objective of this study is to describe the framework for optimizing the heat treatment of alkali silicate glasses, to enhance the performance of hermetic seals in extreme environments. When connectors are exposed to elevated temperatures, residual stresses develop due to the mismatch of thermal expansions between the glass, metal pin, and metal shell. Excessive thermal expansion mismatch compromises the reliability of hermetic seals. In this study, a series of heat treatment schedules will be performed on two commercial sealing glasses (one conventional sealing glass and one crystallizable sealing glass) using a design of experiments (DOE) approach. The coefficient of thermal expansion (CTE) will be measured pre- and post-heat treatment using thermomechanical analysis (TMA). Afterwards, the flexural strength of the specimen will be measured using a four-point bend fixture mounted in a static universal testing machine. The measured material properties will be statistically analyzed using MiniTab software to determine which factors of the heat treatment process have a strong correlation to the coefficient of thermal expansion and/or flexural strength. Finally, a heat-treatment will be designed and tested to ensure the optimal performance of the hermetic seals in connectors.Keywords: glass-ceramics, design of experiment, hermetic connectors, material characterization
Procedia PDF Downloads 150
21759 Nonlinear Analysis of Shear Deformable Deep Beam Resting on Nonlinear Two-Parameter Random Soil
Authors: M. Seguini, D. Nedjar
Abstract:
In this paper, the nonlinear analysis of a Timoshenko beam undergoing moderately large deflections and resting on a nonlinear two-parameter random foundation is presented, taking into account the effects of shear deformation, variation of the beam's properties and the spatial variability of the soil characteristics. The probabilistic finite element analysis has been performed using Timoshenko beam theory with the von Kármán nonlinear strain-displacement relationships, combined with Vanmarcke's theory and Monte Carlo simulations, implemented in a Matlab program. Numerical examples of the newly developed model are presented to confirm its efficiency and accuracy and the importance of accounting for the second foundation parameter (Winkler-Pasternak). Thus, the results obtained from the developed model are presented and compared with those available in the literature to examine how considering the shear deformation and the spatial variability of the soil's characteristics affects the response of the system.
Keywords: nonlinear analysis, soil-structure interaction, large deflection, Timoshenko beam, Euler-Bernoulli beam, Winkler foundation, Pasternak foundation, spatial variability
Procedia PDF Downloads 323
21758 Lean Environmental Management Integration System (LEMIS) Framework Development
Authors: A. P. Puvanasvaran, Suresh A. L. Vasu, N. Norazlin
Abstract:
The Lean Environmental Management Integration System (LEMIS) framework development is an integration between lean core elements and ISO 14001. Curiosity about the relationship between continuous improvement and the sustainability of lean implementation has motivated this study toward LEMIS. The characteristics of the ISO 14001 standard clauses and the core elements of lean principles are explored through past studies and literature reviews. A survey was carried out on ISO 14001 certified companies to examine continual improvement through implementation of the ISO 14001 standard. The study found a significant and positive relationship between the lean principles (value, value stream, flow, pull, and perfection) and the ISO 14001 requirements. LEMIS is significant in supporting continuous improvement and sustainability. The integration system can be implemented in any manufacturing company. It raises awareness of why organizations need to sustain their environmental management system, while the lean principles can be adopted in order to streamline the daily activities of the company. The study has proven that there is no sacrifice or trade-off between lean principles and ISO 14001 requirements. The framework developed in this study can be further simplified in the future, especially the method of crossing each sub-requirement of the ISO 14001 standard with the core elements of the lean principles.
Keywords: LEMIS, ISO 14001, integration, framework
Procedia PDF Downloads 406
21757 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data
Authors: Georgiana Onicescu, Yuqian Shen
Abstract:
Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we proposed a two-stage variable selection method by extending the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors to health outcomes. Specifically, in stage I, we performed the variable selection using Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed the model selection with only the selected variables from stage I and compared again the methods. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases when all candidate risk factors are independently normally distributed, or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, Binary indicator, and the combination of Binary indicator and Lasso were considered and compared as alternative methods. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both independent and dependent cases considered. When compared with the one-stage approach, and the other two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection
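The two-stage idea can be sketched with a frequentist stand-in: a cross-validated Lasso performs the stage I selection, and an unpenalised refit on the selected variables forms stage II. The Bayesian spatial formulation, the count outcome and the real risk factors of the study are replaced here by assumed Gaussian data.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.metrics import r2_score

# Hypothetical correlated risk factors; only the first three truly drive the outcome.
rng = np.random.default_rng(0)
n, p = 300, 15
cov = 0.6 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))   # AR(1)-style correlation
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 1.0 * X[:, 2] + rng.normal(scale=1.0, size=n)

# Stage I: shrinkage-based variable selection (Lasso with cross-validated penalty).
stage1 = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(stage1.coef_ != 0)
print("selected predictors:", selected)

# Stage II: refit an unpenalised model using only the selected variables.
stage2 = LinearRegression().fit(X[:, selected], y)
print("stage II R^2:", round(r2_score(y, stage2.predict(X[:, selected])), 3))
print("stage II coefficients:", np.round(stage2.coef_, 2))
```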
Procedia PDF Downloads 144
21756 A Study on Using Network Coding for Packet Transmissions in Wireless Sensor Networks
Authors: Rei-Heng Cheng, Wen-Pinn Fang
Abstract:
A wireless sensor network (WSN) is composed of a large number of sensors and one or a few base stations, where the sensors are responsible for detecting specific event information, which is sent back to the base station(s). However, reducing energy consumption to extend the network lifetime is a problem that cannot be ignored in wireless sensor networks. Since the sensor network is used to monitor a region or specific events, how the information can be reliably sent back to the base station is surely important. The network coding technique is often used to enhance the reliability of network transmission. When a node needs to send out M data packets, it encodes the data with redundancy and sends out a total of M + R packets. If the receiver gets any M packets out of these M + R packets, it can decode them and recover the original M data packets. Transmitting redundant packets certainly results in excess energy consumption. This paper explores the relationship between the quality of wireless transmission and the number of redundant packets. The goal is for each sensor to overhear nearby transmissions, learn the wireless transmission quality around it, and dynamically determine the number of redundant packets used in network coding.
Keywords: energy consumption, network coding, transmission reliability, wireless sensor networks
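The simplest instance of the M + R scheme described above is R = 1, where the extra packet is the bitwise XOR of the M source packets and the receiver can rebuild the data from any M of the M + 1 packets; larger R in practice uses random linear or Reed-Solomon style codes over bigger fields. The packet sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
source = rng.integers(0, 256, size=(M, 16), dtype=np.uint8)      # M packets of 16 bytes each
parity = np.bitwise_xor.reduce(source, axis=0)                   # one redundant packet (R = 1)
transmitted = np.vstack([source, parity])                        # M + 1 packets on the air

lost = 2                                                          # any single packet may be lost
kept = np.delete(transmitted, lost, axis=0)                       # receiver still holds M packets

# XOR of all kept packets equals the missing source packet (every other packet cancels out).
recovered = np.bitwise_xor.reduce(kept, axis=0)
print("recovered packet", lost, "correctly:", np.array_equal(recovered, source[lost]))
```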
Procedia PDF Downloads 391
21755 Optimum Dewatering Network Design Using Firefly Optimization Algorithm
Authors: S. M. Javad Davoodi, Mojtaba Shourian
Abstract:
A groundwater table close to the ground surface causes major problems in construction and mining operations. One of the methods to control groundwater in such cases is using pumping wells. These pumping wells remove excess water from the project site and lower the water table to a desirable value. Although the efficiency of this method is acceptable, it is expensive to apply. This means that even a small improvement in the design of the pumping wells can lead to substantial cost savings. In order to minimize the total cost of the pumping wells, a simulation-optimization approach is applied. The proposed model integrates MODFLOW as the simulation model with Firefly as the optimization algorithm. MODFLOW computes the drawdown due to pumping in an aquifer, and the Firefly algorithm determines the optimum values of the design parameters, which are the number, pumping rates, and layout of the wells. The developed Firefly-MODFLOW model is applied to minimize the cost of the dewatering project for the ancient mosque of Kerman city in Iran. Repeated runs of the Firefly-MODFLOW model indicate that drilling two wells with a total pumping rate of 5503 m3/day is the solution of the minimization problem. Results show that implementing the proposed solution leads to at least 1.5 m drawdown in the aquifer beneath the mosque region. Also, the subsidence due to groundwater depletion is less than 80 mm. Sensitivity analyses indicate that the desirable groundwater depletion has an enormous impact on the total cost of the project. In addition, in a hypothetical aquifer, decreasing the hydraulic conductivity contributes to a decrease in the total water extraction required for dewatering.
Keywords: groundwater dewatering, pumping wells, simulation-optimization, MODFLOW, firefly algorithm
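The optimization side of the approach can be sketched with a basic firefly algorithm minimizing a made-up dewatering cost (fixed well cost plus pumping cost plus a penalty when an assumed drawdown target is missed); the real study couples the algorithm to MODFLOW rather than to this toy response.

```python
import numpy as np

def firefly_minimize(cost, bounds, n_fireflies=25, n_iter=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Basic firefly algorithm: dimmer fireflies move toward brighter (lower-cost) ones,
    with attractiveness decaying as exp(-gamma * r^2) plus a small random walk."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_fireflies, len(lo)))
    F = np.array([cost(x) for x in X])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if F[j] < F[i]:                               # j is brighter: move i toward j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(len(lo)) - 0.5)
                    X[i] = np.clip(X[i], lo, hi)
                    F[i] = cost(X[i])
    best = np.argmin(F)
    return X[best], F[best]

# Fictitious stand-in for the dewatering cost: two pumping rates q1, q2 (m3/day), a fixed cost
# per well plus a rate-dependent cost, and a penalty if a made-up drawdown target is missed.
def toy_cost(q):
    drawdown = 0.0004 * q[0] + 0.0003 * q[1]          # fictitious linear drawdown response
    penalty = 1e4 * max(0.0, 1.5 - drawdown) ** 2     # require at least 1.5 m of drawdown
    return 2 * 500.0 + 0.8 * q.sum() + penalty

best_q, best_cost = firefly_minimize(toy_cost, bounds=[(0, 5000), (0, 5000)])
print("best pumping rates:", np.round(best_q, 1), " cost:", round(best_cost, 1))
```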
Procedia PDF Downloads 294
21754 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales
Authors: Philipp Sommer, Amgad Agoub
Abstract:
The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. 
The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning
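A compact sketch of the modelling workflow described above: an XGBoost regressor predicts an efficiency score from EPC-style features, and SHAP values rank the feature contributions. The table below is synthetic stand-in data following the described schema, so the printed R² and MAE do not correspond to the reported values.

```python
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for an EPC-derived table (building type, year, area, insulation, wall type...).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "year_built": rng.integers(1900, 2020, n),
    "living_area_m2": rng.uniform(40, 250, n),
    "wall_type": rng.integers(0, 4, n),          # encoded categorical
    "insulation_level": rng.integers(0, 3, n),
})
# Fictitious efficiency score on a 1-100 scale, driven mostly by year built and wall type.
y = (30 + 0.25 * (df["year_built"] - 1900) + 8 * df["insulation_level"]
     - 5 * df["wall_type"] + rng.normal(0, 4, n)).clip(1, 100)

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=400, max_depth=5, learning_rate=0.05, subsample=0.8)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, MAE = {mean_absolute_error(y_te, pred):.2f}")

# SHAP values quantify each feature's contribution to individual predictions,
# mirroring the aggregated feature-importance analysis described in the abstract.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print(pd.Series(np.abs(shap_values).mean(axis=0), index=df.columns).sort_values(ascending=False))
```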
Procedia PDF Downloads 57
21753 Validation of the Formal Model of Web Services Applications for Digital Reference Service of Library Information System
Authors: Zainab Magaji Musa, Nordin M. A. Rahman, Julaily Aida Jusoh
Abstract:
The web services applications for digital reference service (WSDRS) of LIS model is an informal model that claims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient way of satisfying users’ needs in the reference section of libraries. The formal WSDRS model consists of the Z specifications of all the informal specifications of the model. This paper discusses the formal validation of the Z specifications of the WSDRS model. The authors formally verify and thus validate the properties of the model using the Z/EVES theorem prover.
Keywords: validation, verification, formal, theorem prover
Procedia PDF Downloads 516
21752 Identifying and Evaluating the Effectiveness of Communication Channels between Employees and Management Based on the EFQM Excellence Model
Authors: Mehrdad Hosseinishakib, Mozhgan Chakani, Gholamreza Babaei
Abstract:
This study aims to investigate the relationship of bilateral communication channels and communication technologies with effective communication, and of communication technologies with employee motivation and employee participation in decision-making, using the EFQM excellence model in the Education Organization of Area 4 in Karaj. This research is applied in terms of purpose, is a descriptive survey in terms of nature and method, and assesses the current situation using field studies. The statistical population consists of all employees and managers of the Education Organization of Area 4 in Karaj, 5442 persons in total; random sampling was used, and the sample size is 359 according to Cochran's formula. The measurement tool is a researcher-made questionnaire with 20 questions in two categories: specialized and general questions. The first category includes general questions about respondents' personal characteristics, such as gender, level of education, work experience and course of study. The second category includes the specialized questions designed to test the research hypotheses; their reliability was confirmed by a Cronbach's alpha coefficient of 0.916, and their validity was confirmed according to the views of teachers and some senior managers of the Education Organization of Area 4 in Karaj. The results of the analysis show that there is a significant relationship of mutual communication channels and communication technologies with effective communication between employees and management. There is also a significant relationship between communication technologies and both employee motivation and employee participation in decision-making in the Education Organization of Area 4 in Karaj.
Keywords: communication channels, effective communication, EFQM model, ANOVA
Procedia PDF Downloads 244
21751 PointNetLK-OBB: A Point Cloud Registration Algorithm with High Accuracy
Authors: Wenhao Lan, Ning Li, Qiang Tong
Abstract:
To improve the registration accuracy of a source point cloud and template point cloud when the initial relative deflection angle is too large, a PointNetLK algorithm combined with an oriented bounding box (PointNetLK-OBB) is proposed. In this algorithm, the OBB of a 3D point cloud is used to represent the macro feature of source and template point clouds. Under the guidance of the iterative closest point algorithm, the OBB of the source and template point clouds is aligned, and a mirror symmetry effect is produced between them. According to the fitting degree of the source and template point clouds, the mirror symmetry plane is detected, and the optimal rotation and translation of the source point cloud is obtained to complete the 3D point cloud registration task. To verify the effectiveness of the proposed algorithm, a comparative experiment was performed using the publicly available ModelNet40 dataset. The experimental results demonstrate that, compared with PointNetLK, PointNetLK-OBB improves the registration accuracy of the source and template point clouds when the initial relative deflection angle is too large, and the sensitivity of the initial relative position between the source point cloud and template point cloud is reduced. The primary contribution of this paper is the use of PointNetLK to avoid the non-convex problem of traditional point cloud registration and leveraging the regularity of the OBB to avoid the local optimization problem in the PointNetLK context.Keywords: mirror symmetry, oriented bounding box, point cloud registration, PointNetLK-OBB
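The OBB guidance step can be sketched with a PCA-based oriented bounding box: the principal axes of each cloud define its OBB, and aligning the source axes to the template axes gives a coarse registration that is correct only up to axis sign flips, which is exactly the mirror-symmetry ambiguity the paper goes on to resolve. The point clouds below are randomly generated.

```python
import numpy as np

def obb_axes(points):
    """Principal axes of a point cloud via PCA: the centre plus the right singular vectors
    of the centred cloud define an oriented bounding box (OBB) frame."""
    centre = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centre, full_matrices=False)
    return centre, Vt                                   # rows of Vt are the OBB axes

def coarse_obb_align(source, template):
    """Rotate/translate the source so its OBB axes coincide with the template's OBB axes.
    The axes are only defined up to sign, so the result can differ from the true pose by a
    180-degree flip: the mirror-symmetry ambiguity noted in the abstract."""
    c_s, V_s = obb_axes(source)
    c_t, V_t = obb_axes(template)
    R = V_t.T @ V_s                                     # maps the source frame onto the template frame
    if np.linalg.det(R) < 0:                            # forbid improper rotations (reflections)
        V_s[-1] *= -1.0
        R = V_t.T @ V_s
    return (source - c_s) @ R.T + c_t

# Randomly generated, anisotropic cloud; the source is the template rotated about z and shifted.
rng = np.random.default_rng(0)
template = rng.normal(size=(500, 3)) * np.array([3.0, 1.5, 0.5])
th = np.deg2rad(70.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0], [np.sin(th), np.cos(th), 0.0], [0.0, 0.0, 1.0]])
source = template @ Rz.T + np.array([2.0, -1.0, 0.5])

aligned = coarse_obb_align(source, template)
# A small residual means the coarse OBB alignment recovered the pose; a large one signals a
# remaining axis flip, which the full PointNetLK-OBB pipeline detects and corrects.
print("mean point-wise residual after coarse OBB alignment:", np.abs(aligned - template).mean())
```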
Procedia PDF Downloads 150
21750 Laser-Dicing Modeling: Implementation of a High Accuracy Tool for Laser-Grooving and Cutting Application
Authors: Jeff Moussodji, Dominique Drouin
Abstract:
The highly complex technology requirements of today’s integrated circuits (ICs) lead to the increased use of several material types, such as metal structures and brittle, porous low-k materials, which are used in both the front end of line (FEOL) and back end of line (BEOL) processes of wafer manufacturing. In order to singulate chips from the wafer, a critical laser-grooving process, prior to blade dicing, is used to remove these layers of material from the dicing street. The combination of laser grooving and blade dicing reduces the potential risk of induced mechanical defects, such as micro-cracks and chipping, on the wafer top surface where the circuitry is located. It is therefore essential to have a fundamental understanding of the physics involved in laser dicing in order to maximize control of this critical process and reduce its undesirable effects on process efficiency, quality, and reliability. In this paper, the study is based on the convergence of two approaches, numerical and experimental, which allowed us to investigate the interaction of a nanosecond pulsed laser with BEOL wafer materials. To evaluate this interaction, several laser-grooved samples were compared with finite element modeling, in which three different aspects were considered: phase change, thermo-mechanical behavior, and optically sensitive parameters. The mathematical model makes it possible to predict the groove profile (depth, width, etc.) of a single pulse or multiple pulses on BEOL wafer material. Moreover, the heat-affected zone and the thermo-mechanical stress can also be predicted as functions of the laser operating parameters (power, frequency, spot size, defocus, speed, etc.). After model validation and calibration, a satisfying correlation between experimental and modeling results has been observed in terms of groove depth, width, and heat-affected zone. The study proposed in this work is a first step toward implementing a quick assessment tool for the design and debugging of multiple laser-grooving conditions with limited hardware experiments in industrial applications. More correlations and validation tests are in progress and will be included in the full paper.
Keywords: laser-dicing, nano-second pulsed laser, wafer multi-stack, multiphysics modeling
Procedia PDF Downloads 209
21749 Linear MIMO Model Identification Using an Extended Kalman Filter
Authors: Matthew C. Best
Abstract:
Linear Multi-Input Multi-Output (MIMO) dynamic models can be identified, with no a priori knowledge of model structure or order, using a new Generalised Identifying Filter (GIF). Based on an Extended Kalman Filter, the new filter identifies the model iteratively, in a continuous modal canonical form, using only input and output time histories. The filter’s self-propagating state error covariance matrix allows easy determination of convergence and conditioning, and by progressively increasing the model order, the best-fitting reduced-order model can be identified. The method is shown to be resistant to noise and can easily be extended to the identification of smoothly nonlinear systems.
Keywords: system identification, Kalman filter, linear model, MIMO, model order reduction
Procedia PDF Downloads 594
21748 Self-Tuning Power System Stabilizer Based on Recursive Least Square Identification and Linear Quadratic Regulator
Authors: J. Ritonja
Abstract:
Available commercial applications of power system stabilizers assure optimal damping of synchronous generator’s oscillations only in a small part of operating range. Parameters of the power system stabilizer are usually tuned for the selected operating point. Extensive variations of the synchronous generator’s operation result in changed dynamic characteristics. This is the reason that the power system stabilizer tuned for the nominal operating point does not satisfy preferred damping in the overall operation area. The small-signal stability and the transient stability of the synchronous generators have represented an attractive problem for testing different concepts of the modern control theory. Of all the methods, the adaptive control has proved to be the most suitable for the design of the power system stabilizers. The adaptive control has been used in order to assure the optimal damping through the entire synchronous generator’s operating range. The use of the adaptive control is possible because the loading variations and consequently the variations of the synchronous generator’s dynamic characteristics are, in most cases, essentially slower than the adaptation mechanism. The paper shows the development and the application of the self-tuning power system stabilizer based on recursive least square identification method and linear quadratic regulator. Identification method is used to calculate the parameters of the Heffron-Phillips model of the synchronous generator. On the basis of the calculated parameters of the synchronous generator’s mathematical model, the synthesis of the linear quadratic regulator is carried-out. The identification and the synthesis are implemented on-line. In this way, the self-tuning power system stabilizer adapts to the different operating conditions. A purpose of this paper is to contribute to development of the more effective power system stabilizers, which would replace currently used linear stabilizers. The presented self-tuning power system stabilizer makes the tuning of the controller parameters easier and assures damping improvement in the complete operating range. The results of simulations and experiments show essential improvement of the synchronous generator’s damping and power system stability.Keywords: adaptive control, linear quadratic regulator, power system stabilizer, recursive least square identification
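The identification half of the stabilizer can be sketched with a recursive least squares estimator for a discrete ARX model; the second-order plant below is a hypothetical stand-in for the identified Heffron-Phillips dynamics, and the forgetting factor lets the estimate track slow changes in the operating point.

```python
import numpy as np

def rls_identify(u, y, n_a=2, n_b=2, lam=0.98):
    """Recursive least squares with forgetting factor lam, estimating the parameters of an
    ARX model  y[k] = -a1*y[k-1] - ... - a_na*y[k-n_a] + b1*u[k-1] + ... + b_nb*u[k-n_b]."""
    n_theta = n_a + n_b
    theta = np.zeros(n_theta)
    P = np.eye(n_theta) * 1e4                 # large initial covariance: no prior knowledge
    for k in range(max(n_a, n_b), len(y)):
        phi = np.r_[-y[k - n_a:k][::-1], u[k - n_b:k][::-1]]     # regressor vector
        e = y[k] - phi @ theta                                   # prediction error
        K = P @ phi / (lam + phi @ P @ phi)                      # gain
        theta = theta + K * e
        P = (P - np.outer(K, phi @ P)) / lam                     # covariance update
    return theta

# Hypothetical second-order plant standing in for the identified generator dynamics:
# y[k] = 1.5 y[k-1] - 0.7 y[k-2] + 0.5 u[k-1] + 0.3 u[k-2] + noise
rng = np.random.default_rng(0)
u = rng.normal(size=2000)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1] + 0.3 * u[k - 2] + 0.01 * rng.normal()

theta = rls_identify(u, y)
print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))   # should approach [-1.5, 0.7, 0.5, 0.3]
```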
Procedia PDF Downloads 247
21747 Quantitative Assessment of the Motivating Impact of Divine Leadership on Followers
Authors: Parviz Abadi
Abstract:
There is evidence that leadership has been the subject of research since the 19th century, beginning with Thomas Carlyle’s proposal of the Great Man theory. Since that time, there has been ample research on various theories and styles of leadership, while the definition of leadership is still undergoing development. In this context, ‘Divine Leadership’ has been defined. Another aspect of organizational success has been deemed to be follower motivation. Consequently, the objective of this research was to assess the impact of this leadership style by evaluating its relationship with follower motivation. This entailed proposing a theoretical model to depict several hypotheses. Subsequently, the research performed a quantitative study of the relationship of Divine Leadership with Follower Motivation. The findings indicate a high reliability of 95% for the data collected through the field survey. Moreover, Divine Leadership exhibited a statistically significant positive association with Follower Motivation. Furthermore, it was illustrated that Religiosity moderates the relationship between Divine Leadership and Motivation.
Keywords: leadership, management, motivation, religiosity, followers
Procedia PDF Downloads 43