Search results for: information value method
24550 Nonlinear Free Surface Flow Simulations Using Smoothed Particle Hydrodynamics
Authors: Abdelraheem M. Aly, Minh Tuan Nguyen, Sang-Wook Lee
Abstract:
The incompressible smoothed particle hydrodynamics (ISPH) method is used to simulate impact free surface flows. In ISPH, the pressure is evaluated by solving the pressure Poisson equation with a semi-implicit algorithm based on the projection method. The current ISPH method is applied to simulate dam break flow over an inclined plane with different inclination angles. The effects of the inclination angle on the wave-front velocity and pressure distribution are discussed. The water-entry impact of a circular cylinder in a tank has also been simulated using the ISPH method. The computed pressures on the solid boundaries are studied and compared with experimental results.
Keywords: incompressible smoothed particle hydrodynamics, free surface flow, inclined plane, water entry impact
Procedia PDF Downloads 403
24549 Application of Liquid Chromatographic Method for the in vitro Determination of Gastric and Intestinal Stability of Pure Andrographolide in the Extract of Andrographis paniculata
Authors: Vijay R. Patil, Sathiyanarayanan Lohidasan, K. R. Mahadik
Abstract:
Gastrointestinal stability of andrographolide was evaluated in vitro in simulated gastric (SGF) and intestinal (SIF) fluids using a validated HPLC-PDA method. The method was developed and validated using a 5 μm Thermo Hypersil GOLD C18 column (250 mm × 4.0 mm) and a mobile phase of water:acetonitrile, 70:30 (v/v), delivered isocratically at a flow rate of 1 mL/min with UV detection at 228 nm. Andrographolide, in pure form and in Andrographis paniculata extract, was incubated at 37 °C in an incubator shaker in USP simulated gastric and intestinal fluids, with and without enzymes. A systematic protocol as per FDA guidance was followed for the stability study; samples were assayed at 0, 15, 30 and 60 min for the gastric study and at 0, 15, 30, 60 min and 1, 2 and 3 h for the intestinal study. The stability study was also extended to 24 h to observe the degradation pattern in SGF and SIF (with and without enzyme). The developed method was found to be accurate, precise and robust. Andrographolide was found to be stable in SGF (pH ∼1.2) for 1 h and in SIF (pH 6.8) for up to 3 h. The relative difference (RD) between the amount of drug added and the amount found was < 3% at all time points. The present study suggests that drug loss in the gastrointestinal tract may take place by membrane permeation rather than by degradation.
Keywords: andrographolide, Andrographis paniculata, in vitro, stability, gastric, intestinal, HPLC-PDA
Procedia PDF Downloads 243
24548 Enhancing Fault Detection in Rotating Machinery Using Wiener-CNN Method
Authors: Mohamad R. Moshtagh, Ahmad Bagheri
Abstract:
Accurate fault detection in rotating machinery is of utmost importance to ensure optimal performance and prevent costly downtime in industrial applications. This study presents a robust fault detection system based on vibration data collected from rotating gears under various operating conditions. The considered scenarios are: (1) both gears healthy, (2) one healthy gear and one faulty gear, and (3) an imbalance introduced to a healthy gear. Vibration data were acquired using a Hantek 1008 device and stored in a CSV file. Python code implemented in the Spyder environment was used for data preprocessing and analysis. Features were extracted using the Wiener feature selection method. These features were then employed in multiple machine learning algorithms, including Convolutional Neural Networks (CNN), Multilayer Perceptron (MLP), K-Nearest Neighbors (KNN), and Random Forest, to evaluate their performance in detecting and classifying faults on both the training and validation datasets. The comparative analysis revealed the superior performance of the Wiener-CNN approach, which achieved a remarkable 100% accuracy for both the two-class (healthy gear and faulty gear) and three-class (healthy gear, faulty gear, and imbalanced) scenarios on the training and validation datasets. In contrast, the other methods exhibited varying levels of accuracy. The Wiener-MLP method attained 100% accuracy on the two-class training and validation datasets; for the three-class scenario, it demonstrated 100% accuracy on the training dataset and 95.3% on the validation dataset. The Wiener-KNN method yielded 96.3% accuracy on the two-class training dataset and 94.5% on the validation dataset; in the three-class scenario, it achieved 85.3% on the training dataset and 77.2% on the validation dataset.
The Wiener-Random Forest method achieved 100% accuracy on the two-class training dataset and 85% on the validation dataset; in the three-class scenario, it attained 100% accuracy on the training dataset and 90.8% on the validation dataset. The exceptional accuracy demonstrated by the Wiener-CNN method underscores its effectiveness in identifying and classifying fault conditions in rotating machinery. The proposed fault detection system uses vibration data analysis and advanced machine learning techniques to improve operational reliability and productivity. By adopting the Wiener-CNN method, industrial systems can benefit from enhanced fault detection capabilities, facilitating proactive maintenance and reducing equipment downtime.
Keywords: fault detection, gearbox, machine learning, Wiener method
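As a hedged illustration (not the authors' code), the preprocessing step described above can be sketched in Python: a local Wiener filter denoises the vibration signal, and a few condition-indicator features are extracted for a downstream classifier such as a CNN. The window size, noise estimate, and feature choice are assumptions.

```python
import numpy as np

def wiener_filter(x, window=11, noise_var=None):
    """Local-statistics Wiener filter (same idea as scipy.signal.wiener):
    smooth where the local variance is close to the noise level, keep
    detail where the signal dominates."""
    kernel = np.ones(window) / window
    mean = np.convolve(x, kernel, mode="same")
    sq_mean = np.convolve(x ** 2, kernel, mode="same")
    var = np.maximum(sq_mean - mean ** 2, 1e-12)
    if noise_var is None:
        noise_var = var.mean()        # crude estimate of the noise floor
    gain = np.maximum(1.0 - noise_var / var, 0.0)
    return mean + gain * (x - mean)

def vibration_features(x):
    """A few condition indicators commonly used for gear-fault data."""
    rms = np.sqrt(np.mean(x ** 2))
    kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
    crest_factor = np.max(np.abs(x)) / rms
    return np.array([rms, kurtosis, crest_factor])
```

The resulting feature vectors (or the filtered signals themselves) would then be fed to the CNN, MLP, KNN, or Random Forest classifiers compared in the study.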
Procedia PDF Downloads 80
24547 A Nonstandard Finite Difference Method for Weather Derivatives Pricing Model
Authors: Clarinda Vitorino Nhangumbe, Fredericks Ebrahim, Betuel Canhanga
Abstract:
The price of a weather derivative option can be approximated as a solution of a two-dimensional, convection-dominated partial differential equation derived from the Ornstein-Uhlenbeck process, where one variable represents the weather dynamics and the other represents the underlying weather index. With appropriate financial boundary conditions, the solution of the pricing equation is approximated using a nonstandard finite difference method. It is shown that the proposed numerical scheme preserves positivity as well as stability and consistency. To illustrate the accuracy of the method, the numerical results are compared with those of other methods, and the model is tested on real weather data.
Keywords: nonstandard finite differences, Ornstein-Uhlenbeck process, partial differential equations approach, weather derivatives
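To make the idea concrete, here is a hedged one-dimensional sketch of a positivity-preserving nonstandard finite difference scheme for the model equation u_t + a u_x = d u_xx (the paper's two-dimensional pricing equation is analogous). The denominator function phi(dt) = 1 - exp(-dt) and the upwind discretization are illustrative Mickens-type choices, not the authors' exact scheme.

```python
import numpy as np

def nsfd_advection_diffusion(u0, a, d, dx, dt, steps):
    """Positivity-preserving NSFD scheme for u_t + a u_x = d u_xx.
    Nonstandard feature: dt is replaced by phi(dt) = 1 - exp(-dt)."""
    phi = 1.0 - np.exp(-dt)           # Mickens-type denominator function
    C = a * phi / dx                  # upwind convection coefficient
    D = d * phi / dx ** 2             # diffusion coefficient
    assert C >= 0 and D >= 0 and 1.0 - C - 2.0 * D >= 0, "positivity condition violated"
    u = u0.copy()
    for _ in range(steps):
        u_new = u.copy()
        # convex combination of neighbors => positivity and no new maxima
        u_new[1:-1] = ((1 - C - 2 * D) * u[1:-1]
                       + (C + D) * u[:-2]    # upwind + diffusion from the left
                       + D * u[2:])          # diffusion from the right
        u = u_new                            # boundary values held fixed (Dirichlet)
    return u
```

Because the update is a convex combination whenever 1 - C - 2D >= 0, a nonnegative initial profile stays nonnegative, which is the discrete analogue of the positivity claim in the abstract.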
Procedia PDF Downloads 109
24546 Effect of Incentives on Knowledge Sharing and Learning: Evidence from the Indian IT Sector
Authors: Asish O. Mathew, Lewlyn L. R. Rodrigues
Abstract:
Organizations in the knowledge economy era have recognized the importance of building knowledge assets for sustainable growth and development. In comparison to other industries, Information Technology (IT) enterprises hold an edge in developing an effective Knowledge Management (KM) program, thanks to their in-house technological abilities. This paper studies various knowledge-based incentive programs and their effect on knowledge sharing and learning in the context of the Indian IT sector. A conceptual model is developed linking KM incentives, knowledge sharing, and learning. A questionnaire study was conducted to collect primary data from knowledge workers of IT organizations located in India. The data were analysed using Structural Equation Modeling with the Partial Least Squares method. The results show a strong influence of knowledge management incentives on knowledge sharing and an indirect influence on learning.
Keywords: knowledge management, knowledge management incentives, knowledge sharing, learning
Procedia PDF Downloads 477
24545 Immediate Geometric Solution of Irregular Quadrilaterals: A Digital Tool Applied to Topography
Authors: Miguel Mariano Rivera Galvan
Abstract:
The purpose of this research was to create a digital tool by which users can obtain an immediate and accurate solution for the angular characteristics of an irregular quadrilateral. The project arose because of the frequent absence of a polygon’s geometric information in land ownership accreditation documents. The researcher created a mathematical model using a linear approximation iterative method, employing various disciplines and techniques including trigonometry, geometry, algebra, and topography. The model takes as input the surface of the quadrilateral and the lengths of its sides, and returns its interior angles, making its representation in a coordinate system possible. The results are as accurate and reliable as the user requires, offering the possibility of using this tool as a support to develop future engineering and architecture projects quickly and reliably.
Keywords: digital tool, geometry, mathematical model, quadrilateral, solution
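The core computation can be illustrated with a hedged Python sketch (the tool's actual iteration scheme may differ): split the quadrilateral by a diagonal, search for the diagonal length whose two Heron areas sum to the given surface, then recover the interior angles by the law of cosines.

```python
import numpy as np

def heron(a, b, c):
    """Triangle area from side lengths (clamped for numerical safety)."""
    s = (a + b + c) / 2.0
    return np.sqrt(np.maximum(s * (s - a) * (s - b) * (s - c), 0.0))

def angle_deg(u, v, w):
    """Angle (degrees) opposite side w in a triangle with sides u, v, w."""
    return np.degrees(np.arccos((u * u + v * v - w * w) / (2.0 * u * v)))

def solve_quadrilateral(a, b, c, d, area, n_grid=200_000):
    """Interior angles (degrees) at vertices A, B, C, D of quadrilateral
    ABCD with AB=a, BC=b, CD=c, DA=d and the given surface. The diagonal
    AC=p splits ABCD into triangles (a, b, p) and (p, c, d); p is found
    by a grid search minimizing the area mismatch."""
    p_lo = max(abs(a - b), abs(c - d)) + 1e-9
    p_hi = min(a + b, c + d) - 1e-9
    p_grid = np.linspace(p_lo, p_hi, n_grid)
    total = heron(a, b, p_grid) + heron(c, d, p_grid)
    p = p_grid[np.argmin(np.abs(total - area))]
    A = angle_deg(a, p, b) + angle_deg(d, p, c)   # angle A is split by AC
    B = angle_deg(a, b, p)
    C = angle_deg(b, p, a) + angle_deg(c, p, d)   # angle C is split by AC
    D = angle_deg(c, d, p)
    return A, B, C, D
```

With the sides and area fixing the quadrilateral's one remaining degree of freedom (up to reflection), the recovered angles also yield coordinates for plotting, as the abstract describes.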
Procedia PDF Downloads 146
24544 An Image Stitching Approach for Scoliosis Analysis
Authors: Siti Salbiah Samsudin, Hamzah Arof, Ainuddin Wahid Abdul Wahab, Mohd Yamani Idna Idris
Abstract:
Standard X-ray spine images produced by the conventional screen-film technique have a limited field of view. This limitation may obstruct a complete inspection of the spine unless images of different parts of the spine are placed next to each other contiguously to form a complete structure. Another way to produce a whole-spine image is to assemble the digitized X-ray images of its parts automatically using image stitching. This paper presents a new Medical Image Stitching (MIS) method that utilizes Minimum Average Correlation Energy (MACE) filters to identify and merge pairs of X-ray medical images. The effectiveness of the proposed method is demonstrated in two sets of experiments involving two databases containing a total of 40 pairs of overlapping and non-overlapping spine images. The experimental results are compared with those produced by the Normalized Cross Correlation (NCC) and Phase Only Correlation (POC) methods. The proposed method outperforms the NCC and POC methods in identifying both overlapping and non-overlapping medical images. Its efficacy is further vindicated by its average execution time, which is about two to five times shorter than those of the POC and NCC methods.
Keywords: image stitching, MACE filter, panorama image, scoliosis
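As an illustration of the correlation-based matching idea, here is a hedged NumPy sketch using the NCC baseline the paper compares against (the MACE filter itself involves a frequency-domain filter synthesis omitted here): the overlap between two image strips is found by maximizing normalized cross-correlation, and the pair is merged by averaging the overlap.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def find_vertical_overlap(top, bottom, min_overlap=5):
    """Return the overlap height (rows) maximizing NCC between the bottom
    strip of `top` and the top strip of `bottom`, plus the NCC score."""
    best, best_h = -2.0, 0
    max_h = min(top.shape[0], bottom.shape[0])
    for h in range(min_overlap, max_h + 1):
        score = ncc(top[-h:], bottom[:h])
        if score > best:
            best, best_h = score, h
    return best_h, best

def stitch_vertical(top, bottom, overlap):
    """Merge the two images by averaging the overlapping rows."""
    blended = (top[-overlap:] + bottom[:overlap]) / 2.0
    return np.vstack([top[:-overlap], blended, bottom[overlap:]])
```

A MACE-based variant would replace `ncc` with correlation against a synthesized filter designed to produce a sharp correlation peak, which is what gives the paper's method its discrimination advantage.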
Procedia PDF Downloads 458
24543 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines
Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka
Abstract:
To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. 
First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps
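A minimal sketch of the underlying idea (not the authors' missSOM implementation, which entwines the updates differently): an online Kohonen pass in which best-matching-unit search and codebook updates use only the observed coordinates, followed by imputation of each missing entry from the winning unit's codebook vector. The grid size, decay schedules, and initialization are illustrative assumptions.

```python
import numpy as np

def miss_som(X, grid=(3, 3), epochs=20, seed=0):
    """SOM training on partially observed data: distances and updates are
    restricted to observed dimensions; missing entries are then imputed
    from the best-matching unit's codebook vector."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    k = grid[0] * grid[1]
    units = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    W = rng.normal(size=(k, p))                # codebook vectors
    obs = ~np.isnan(X)
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)            # decaying learning rate
        sigma = 1.5 * (1 - t / epochs) + 0.3   # decaying neighborhood radius
        for i in rng.permutation(n):
            m = obs[i]
            d = ((W[:, m] - X[i, m]) ** 2).sum(axis=1)   # observed dims only
            bmu = int(np.argmin(d))
            h = np.exp(-((units - units[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            W[:, m] += lr * h[:, None] * (X[i, m] - W[:, m])
    Xi = X.copy()
    for i in range(n):                          # impute from each BMU
        m = obs[i]
        d = ((W[:, m] - X[i, m]) ** 2).sum(axis=1)
        bmu = int(np.argmin(d))
        Xi[i, ~m] = W[bmu, ~m]
    return Xi, W
```

The paper's criterion-based formulation optimizes the map and the imputations jointly; the two-pass sketch above only approximates that alternation.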
Procedia PDF Downloads 151
24542 Towards a Framework for Embedded Weight Comparison Algorithm with Business Intelligence in the Plantation Domain
Authors: M. Pushparani, A. Sagaya
Abstract:
Embedded systems have emerged as important elements in various domains, with extensive applications in the automotive, commercial, consumer, healthcare and transportation markets, as the emphasis on intelligent devices grows. Business Intelligence (BI) has likewise been used extensively in a range of applications, especially in the agriculture domain, which is the area of this research. The aim of this research is to create a framework for an Embedded Weight Comparison Algorithm with Business Intelligence (EWCA-BI). The weight comparison algorithm will be embedded within the plantation management system and the weighbridge system. It will estimate the weight at the site and compare it with the actual weight at the plantation, raising the necessary alerts when there is a discrepancy and thus enabling better decision making. In current practice, data are collected from various locations in various forms, and it is a challenge to consolidate them to obtain timely and accurate information for effective decision making; unstable network connections add to the difficulty. To overcome these challenges, the algorithm is embedded on a portable device that also assists in data capture and synchronizes data across locations, overcoming the network shortcomings at collection points. The EWCA-BI will provide real-time information at any given point of time, enabling non-latent BI reports that provide crucial information for efficient operational decision making. This research has high potential to bring embedded systems into the agriculture industry.
EWCA-BI will provide BI reports with accurate information from uncompromised data using an embedded system, and will raise alerts, thereby enabling effective operation management decision-making at the site.
Keywords: embedded business intelligence, weight comparison algorithm, oil palm plantation, embedded systems
Procedia PDF Downloads 285
24541 Wearable Music: Generation of Costumes from Music and Generative Art and Wearing Them by 3-Way Projectors
Authors: Noriki Amano
Abstract:
The final goal of this study is to create another way for people to enjoy music, through the performance of 'Wearable Music'. Concretely speaking, we generate colorful costumes from music in real time and 'dress' a person in them by projection. For this purpose, we propose three methods: first, a method of giving color to music in a three-dimensional way; second, a method of generating images of costumes from music; third, a method of wearing the images of music. In particular, this study stands out from related work in that we generate images of unique costumes from music and realize wearing them. We use the technique of generative art to generate the costume images and project them onto fog generated around a person from three directions using projectors. From this study, we obtain a way to enjoy music as something 'wearable'. Furthermore, we also see the prospect of unconventional entertainment based on the fusion of music and costumes.
Keywords: entertainment computing, costumes, music, generative programming
Procedia PDF Downloads 173
24540 Taguchi Method for Analyzing a Flexible Integrated Logistics Network
Authors: E. Behmanesh, J. Pannek
Abstract:
Logistics network design is known as one of the strategic decision problems. As such problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we involve reverse flow through an integrated design of a forward/reverse supply chain network, formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm with a revised random-path direct encoding method is considered as the solution methodology. Every algorithm has parameters that need to be investigated to reveal its best performance. In this regard, the Taguchi method is adopted to identify the optimum operating condition of the proposed memetic algorithm and improve the results. Four factors are considered: population size, crossover rate, local search iterations, and number of iterations. Analyzing the parameters and the resulting improvement are the outlook of this research.
Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method
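The parameter-design step can be sketched as follows (a hedged illustration; in the paper the response would be the memetic algorithm's solution quality, and the concrete level values are assumptions): assign the four factors to an L9(3^4) orthogonal array, compute a larger-the-better signal-to-noise ratio per run, and pick the level with the best mean S/N for each factor.

```python
import numpy as np

# L9(3^4) orthogonal array: 9 runs, 4 factors (e.g. population size,
# crossover rate, local-search iterations, number of iterations),
# each at 3 levels coded 0, 1, 2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def sn_larger_better(y):
    """Taguchi signal-to-noise ratio for a 'larger is better' response."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def best_levels(responses):
    """responses[r] = replicate outcomes for run r of the L9 array.
    Returns, per factor, the level with the highest mean S/N ratio."""
    sn = np.array([sn_larger_better(r) for r in responses])
    picks = []
    for f in range(L9.shape[1]):
        means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
        picks.append(int(np.argmax(means)))
    return picks
```

Nine runs thus screen 3^4 = 81 level combinations, which is exactly the economy the Taguchi method offers over a full factorial experiment.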
Procedia PDF Downloads 187
24539 Analysis of Reflection of Elastic Waves in Three Dimensional Model Comprised with Viscoelastic Anisotropic Medium
Authors: Amares Chattopadhyay, Akanksha Srivastava
Abstract:
A unified approach is presented to study the reflection of a plane wave in a three-dimensional model comprising a triclinic viscoelastic medium. The phase velocities of the reflected qP, qSV and qSH waves have been calculated for the concerned medium using the eigenvalue approach. A generalized method has been implemented to compute the complex form of the amplitude ratios. Further, we discuss the nature of the reflection coefficients of the qP, qSV and qSH waves. The amplitude ratios are found to be strongly influenced by the viscoelastic parameter, polar angle and azimuthal angle. The article focuses in particular on the effect of viscoelasticity associated with highly anisotropic media, which yields notable information about the reflection coefficients of the qP, qSV, and qSH waves. The outcomes may be useful for better exploration of all types of hydrocarbon reservoirs and for advancement in the field of reflection seismology.
Keywords: amplitude ratios, three dimensional, triclinic, viscoelastic
Procedia PDF Downloads 230
24538 Bright, Dark N-Soliton Solution of Fokas-Lenells Equation Using Hirota Bilinearization Method
Authors: Sagardeep Talukdar, Riki Dutta, Gautam Kumar Saharia, Sudipta Nandy
Abstract:
In nonlinear optics, the Fokas-Lenells equation (FLE) is a well-known integrable equation that describes how ultrashort pulses move across an optical fiber. It admits localized wave solutions, just like any other integrable equation. We apply the Hirota bilinearization method to obtain the soliton solution of the FLE. The proposed bilinearization makes use of an auxiliary function. We apply the method to the FLE with a vanishing boundary condition to obtain bright soliton solutions: we have obtained bright 1-soliton and 2-soliton solutions and propose a scheme for obtaining an N-soliton solution. We have used an additional parameter that is responsible for the shift in the position of the soliton. Further analysis of the 2-soliton solution is done by asymptotic analysis. With a non-vanishing boundary condition, we obtain the dark 1-soliton solution. We find that the suggested bilinearization approach, which makes use of the auxiliary function, greatly simplifies the process while still producing the desired outcome. We believe that the present analysis will be helpful in understanding how the FLE is used in nonlinear optics and other areas of physics.
Keywords: asymptotic analysis, Fokas-Lenells equation, Hirota bilinearization method, soliton
Procedia PDF Downloads 112
24537 Risk Management in Industrial Supervision Projects
Authors: Érick Aragão Ribeiro, George André Pereira Thé, José Marques Soares
Abstract:
Several problems in industrial supervision software development projects may lead to delays or cancellation. These problems can be avoided or contained by using methods for the identification, analysis and control of risks, which give an overview of the problems that can occur in a project and of the immediate solutions. We therefore propose a risk management method applied to the teaching and development of industrial supervision software. The method, developed through a literature review and the analysis of previous projects, is divided into management phases and has basic features that are validated through experimental research carried out by mechatronics engineering students and professionals. Management is conducted through the stages of identification, analysis, planning, monitoring, control and communication of risks. Programmers prioritize risks considering the severity and the likelihood of occurrence of each risk. The outputs of the method indicate which risks have occurred or are about to happen. The first results indicate which risks occur at different stages of the project and which risks have a high probability of occurring. The results show the efficiency of the proposed method compared to other methods, showing the improvement of software quality and guiding developers in their decisions. This new way of developing supervision software helps students identify design problems, evaluate the software developed and propose effective solutions. We conclude that risk management optimizes the development of industrial process control software and yields a higher-quality product.
Keywords: supervision software, risk management, industrial supervision, project management
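The prioritization rule described (severity times likelihood of occurrence) can be sketched in Python; the 1-to-5 scales and the escalation threshold below are illustrative assumptions, not the paper's exact matrix.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int   # likelihood of occurrence, 1 (rare) .. 5 (almost certain)
    severity: int      # gravity of impact, 1 (negligible) .. 5 (critical)

    @property
    def score(self) -> int:
        return self.probability * self.severity

def prioritize(risks):
    """Rank risks by probability x severity, highest first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

def needs_immediate_action(risk, threshold=15):
    """Flag risks whose score reaches an (assumed) escalation threshold."""
    return risk.score >= threshold
```

During monitoring, re-scoring the list after each project stage reproduces the method's outputs: which risks have occurred or are about to happen, and which require immediate mitigation.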
Procedia PDF Downloads 356
24536 Cloud Data Security Using Map/Reduce Implementation of Secret Sharing Schemes
Authors: Sara Ibn El Ahrache, Tajje-eddine Rachidi, Hassan Badir, Abderrahmane Sbihi
Abstract:
Recently, there has been increasing confidence in the favorable usage of big data drawn from the huge amount of information deposited in cloud computing systems. Data kept on such systems can be retrieved through the network at the user’s convenience. However, the data that users send include private information, and information leakage from these data is now a major social problem. The usage of secret sharing schemes for cloud computing has lately been shown to be relevant: users distribute their data across several servers. Notably, in a (k,n) threshold scheme, data security is assured as long as, throughout the whole life of the secret, the opponent cannot compromise k or more of the n servers. A number of secret sharing algorithms have been suggested to deal with these security issues. In this paper, we present a MapReduce implementation of Shamir’s secret sharing scheme to increase its performance and to achieve optimal security for cloud data. Different tests were run, demonstrating the contributions of the proposed approach, which are considerable in terms of both security and performance.
Keywords: cloud computing, data security, MapReduce, Shamir's secret sharing
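For illustration, here is a minimal pure-Python Shamir (k, n) threshold scheme over a prime field; the MapReduce distribution layer the paper builds is omitted, and the prime and API below are assumptions for the sketch.

```python
import random

# Mersenne prime defining the field GF(p); any prime larger than the secret works.
PRIME = 2**127 - 1

def make_shares(secret, k, n, seed=0):
    """Split `secret` into n shares; any k of them reconstruct it.
    NOTE: a seeded PRNG is used only for reproducibility of this sketch;
    real deployments must use a CSPRNG (e.g. the `secrets` module)."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(1, PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):     # Horner evaluation of the polynomial mod p
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

In the MapReduce setting described by the abstract, the map phase would compute shares per data block in parallel and the reduce phase would regroup shares per destination server.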
Procedia PDF Downloads 306
24535 Satellite Imagery Classification Based on Deep Convolution Network
Authors: Zhong Ma, Zhuping Wang, Congxin Liu, Xiangzeng Liu
Abstract:
Satellite imagery classification is a challenging problem with many practical applications. In this paper, we design a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduce the inception module, which has multiple filters of different sizes at the same level, as the building block of our DCNN model. Second, we propose a genetic algorithm based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results show that the proposed hyper-parameter search method guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the classification accuracy of the proposed models outperforms that of the state-of-the-art method.
Keywords: satellite imagery classification, deep convolution network, genetic algorithm, hyper-parameter optimization
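The hyper-parameter search loop can be sketched generically (a hedged illustration: the search space, operators, and rates are assumptions, and the true fitness in the paper is the validation accuracy of a trained DCNN, represented here by any callable).

```python
import random

# Assumed hyper-parameter search space for illustration only.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "n_filters":     [16, 32, 64, 128],
    "kernel_size":   [3, 5, 7],
    "dropout":       [0.0, 0.25, 0.5],
}
KEYS = list(SEARCH_SPACE)

def random_individual(rng):
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b, rng):
    """Uniform crossover: each gene comes from either parent."""
    return {k: (a if rng.random() < 0.5 else b)[k] for k in KEYS}

def mutate(ind, rng, rate=0.2):
    """Resample each gene with a small probability."""
    return {k: (rng.choice(SEARCH_SPACE[k]) if rng.random() < rate else v)
            for k, v in ind.items()}

def evolve(fitness, pop_size=20, generations=30, seed=0):
    """Elitist GA loop; `fitness` maps a hyper-parameter dict to a score
    (in the paper, validation accuracy of the trained DCNN)."""
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]      # keep the best quarter unchanged
        children = [mutate(crossover(rng.choice(elite), rng.choice(elite), rng), rng)
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)
```

Because every fitness evaluation in the real setting costs a full network training run, the GA's value lies in reaching good regions of the space with far fewer evaluations than a grid search.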
Procedia PDF Downloads 300
24534 “I” on the Web: Social Penetration Theory Revised
Authors: Dionysis Panos, Dept. of Communication & Internet Studies, Cyprus University of Technology
Abstract:
The widespread use of New Media, and particularly Social Media, through fixed or mobile devices has profoundly changed our perception of what is "intimate" and "safe" and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as from the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, applicable to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. The main conclusions include: (1) There is a clear, evidenced shift in users' perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras; the role of Social Media in this shift was catalytic. (2) Basic Web 2.0 applications changed the nature of the Internet itself dramatically, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and Social Media brought about a significant change in the concept of "audience" we address in interpersonal communication: the previous "general and unknown audience" of personal home pages was converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed in a fundamental way. (5) The distinctive features of the mediated environment of online communication, and the critical changes that have occurred since the advance of Web 2.0, lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.
Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information
Procedia PDF Downloads 371
24533 Numerical Calculation of Dynamic Response of Catamaran Vessels Based on 3D Green Function Method
Authors: Md. Moinul Islam, N. M. Golam Zakaria
Abstract:
Seakeeping analysis of catamaran vessels in the earlier stages of design has become an important issue, as it dictates the seakeeping characteristics and ensures safe navigation during the voyage. In the present paper, a 3D numerical method for the seakeeping prediction of catamaran vessels is presented using the 3D Green function method. Both the steady and unsteady potential flow problems are dealt with. Using 3D linearized potential theory, the dynamic wave loads and the subsequent response of the vessel are computed. For validation of the numerical procedure, a catamaran vessel composed of twin Wigley-form demi-hulls is used. The results of the present calculation are compared with the available experimental data and also with other calculations. The numerical procedure is also carried out for an NPL-based round bilge catamaran, and hydrodynamic coefficients along with heave and pitch motion responses are presented for various Froude numbers. The results obtained by the present numerical method are found to be in fairly good agreement with the available data, so it can be used as a design tool for predicting the seakeeping behavior of catamaran ships in waves.
Keywords: catamaran, hydrodynamic coefficients, motion response, 3D Green function
Procedia PDF Downloads 220
24532 The Use of Crisis Workplace Technology to Protect Communication Processes of Critical Infrastructure
Authors: Jiri Barta, Jiří F. Urbanek
Abstract:
This paper deals with the protection of national and European infrastructure, a pressing issue nowadays. It discusses the perspectives and possibilities of "smart solutions" for critical infrastructure protection. The research project examines how computer-aided technologies can be used to provide new, better protection of selected infrastructure objects. Protection is focused on communication and information channels, which are very important for the functioning of the system protecting critical infrastructure elements.
Keywords: interoperability, communication systems, controlling process, critical infrastructure, crisis workplaces, continuity
Procedia PDF Downloads 299
24531 A Sectional Control Method to Decrease the Accumulated Survey Error of Tunnel Installation Control Network
Authors: Yinggang Guo, Zongchun Li
Abstract:
In order to decrease the accumulated survey error of the tunnel installation control network of a particle accelerator, a sectional control method is proposed. Firstly, the accumulation rule of positional error with the length of the control network is obtained by simulation calculation according to the shape of the tunnel installation control network. Then, the RMS of horizontal positional precision of the tunnel backbone control network is taken as the threshold. When the accumulated error is bigger than the threshold, the tunnel installation control network is divided into subsections reasonably. On each segment, the middle survey station is taken as the datum for an independent adjustment calculation. Finally, by taking the backbone control points as quasi-stable datums, a weighted partial-parameter adjustment is performed with the adjustment results of each segment and the coordinates of the backbone control points. The subsections are jointed and unified into the global coordinate system in the adjustment process. An installation control network of a linac with a length of 1.6 km is simulated. The RMS of positional deviation of the proposed method is 2.583 mm, and the RMS of the difference of positional deviation between adjacent points reaches 0.035 mm. Experimental results show that the proposed sectional control method can not only effectively decrease the accumulated survey error but also guarantee the relative positional precision of the installation control network, so it can be applied in the data processing of tunnel installation control networks, especially for large particle accelerators.Keywords: alignment, tunnel installation control network, accumulated survey error, sectional control method, datum
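The accumulation rule and the benefit of sectioning can be illustrated with a small Monte Carlo sketch. This is a hypothetical one-dimensional analogue with illustrative constants, not the study's data: each station adds an independent error, so positional RMS grows roughly with the square root of the station count, while re-datuming every segment caps the accumulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, sigma, seg, n_trials = 160, 0.2, 40, 4000  # mm per station (illustrative)

errs = rng.normal(0.0, sigma, (n_trials, n_stations))
full = np.cumsum(errs, axis=1)                   # single continuous network
rms_full_end = np.sqrt((full[:, -1] ** 2).mean())

# Sectional control: error accumulation restarts at the datum of each segment.
sect = np.concatenate(
    [np.cumsum(errs[:, i:i + seg], axis=1) for i in range(0, n_stations, seg)],
    axis=1,
)
rms_sect_max = np.sqrt((sect ** 2).mean(axis=0)).max()
print(round(rms_full_end, 2), round(rms_sect_max, 2))
```

The full network's end-point RMS approaches sigma·sqrt(n_stations), while sectioning bounds it near sigma·sqrt(seg).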
Procedia PDF Downloads 19124530 Optimized Real Ground Motion Scaling for Vulnerability Assessment of Building Considering the Spectral Uncertainty and Shape
Authors: Chen Bo, Wen Zengping
Abstract:
Based on the results of previous studies, we focus on real ground motion selection and scaling methods for structural performance-based seismic evaluation using nonlinear dynamic analysis. The input earthquake ground motions should be determined appropriately to make them compatible with the site-specific hazard level considered. Thus, an optimized selection and scaling method is established, using not only the Monte Carlo simulation method to create a stochastic simulation spectrum that accounts for the multivariate lognormal distribution of the target spectrum, but also a spectral shape parameter. Its applications in structural fragility analysis are demonstrated through case studies. Compared to the previous scheme, which did not consider the uncertainty of the target spectrum, the method shown here ensures that the selected records are in good agreement with the median value, standard deviation, and spectral correlation of the target spectrum, and it fully reveals the uncertainty of the site-specific hazard level. Meanwhile, it helps improve computational efficiency and matching accuracy. Given the important influence of the target spectrum's uncertainty on structural seismic fragility analysis, this work provides a reasonable and reliable basis for structural seismic evaluation under a scenario earthquake environment.Keywords: ground motion selection, scaling method, seismic fragility analysis, spectral shape
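The Monte Carlo step can be sketched as follows, assuming a hypothetical target spectrum: the periods, median ordinates, dispersions, and the simple inter-period correlation model below are all illustrative, not the paper's data. Correlated normal deviations are exponentiated to obtain multivariate-lognormal simulated spectra.

```python
import numpy as np

# Hypothetical target spectrum: median Sa (g) and log-standard deviation.
periods = np.array([0.1, 0.5, 1.0, 2.0])
median_sa = np.array([0.80, 0.60, 0.30, 0.12])
sigma_ln = np.array([0.50, 0.55, 0.60, 0.65])

# Assumed inter-period correlation: exponential decay in ln(T) distance.
lnT = np.log(periods)
rho = np.exp(-np.abs(lnT[:, None] - lnT[None, :]))
cov = rho * np.outer(sigma_ln, sigma_ln)

rng = np.random.default_rng(0)
n_sims = 1000
# Multivariate lognormal: exponentiate correlated normal deviations.
eps = rng.multivariate_normal(np.zeros(len(periods)), cov, size=n_sims)
sim_spectra = median_sa * np.exp(eps)  # shape (n_sims, n_periods)
print(sim_spectra.shape)
```

The simulated spectra reproduce the target median, dispersion, and correlation structure, which is what the selected records are then matched against.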
Procedia PDF Downloads 29324529 Multi-Response Optimization of CNC Milling Parameters Using Taguchi Based Grey Relational Analysis for AA6061 T6 Aluminium Alloy
Authors: Varsha Singh, Kishan Fuse
Abstract:
This paper presents a study of the grey-Taguchi method to optimize CNC milling parameters of AA6061 T6 aluminium alloy. The grey-Taguchi method combines Taguchi-based design of experiments (DOE) with grey relational analysis (GRA). Multi-response optimization of different quality characteristics, such as surface roughness, material removal rate, and cutting forces, is done using GRA. The milling parameters considered for the experiments are cutting speed, feed per tooth, and depth of cut, each with three levels. A grey relational grade is used to estimate the overall quality-characteristic performance. Taguchi's L9 orthogonal array is used for the design of experiments, and MINITAB 17 software is used for optimization. Analysis of variance (ANOVA) is used to identify the most influential parameter. The experimental results show that grey relational analysis is an effective method for optimizing multi-response characteristics. The optimum results are finally validated by performing a confirmation test.Keywords: ANOVA, CNC milling, grey relational analysis, multi-response optimization
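The grey relational computation itself is compact. The sketch below uses hypothetical responses for two of the quality characteristics over an L9 design (the values are not the paper's measurements): normalization, grey relational coefficients with the usual distinguishing coefficient ζ = 0.5, and equal-weight grades follow the standard GRA recipe.

```python
import numpy as np

# Illustrative responses for 9 experiments (L9 array): surface roughness Ra
# (smaller-is-better) and material removal rate MRR (larger-is-better).
ra  = np.array([1.8, 1.5, 1.2, 1.6, 1.1, 0.9, 1.4, 1.0, 0.8])
mrr = np.array([12., 18., 25., 15., 22., 30., 17., 26., 33.])

def normalize(x, larger_is_better):
    # Grey relational generation: scale each response to [0, 1].
    if larger_is_better:
        return (x - x.min()) / (x.max() - x.min())
    return (x.max() - x) / (x.max() - x.min())

def grey_relational_coeff(z, zeta=0.5):
    # Deviation from the ideal sequence (all ones after normalization).
    delta = np.abs(1.0 - z)
    return (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

z = np.column_stack([normalize(ra, False), normalize(mrr, True)])
grg = grey_relational_coeff(z).mean(axis=1)  # equal weights per response
best = int(np.argmax(grg))
print(best, grg.round(3))
```

The experiment with the highest grey relational grade gives the parameter combination closest to optimizing all responses at once.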
Procedia PDF Downloads 30724528 A Pilot Randomized Controlled Trial of a Physical Activity Intervention in a Low Socioeconomic Population: Focus on Mental Contrasting with Implementation Intentions
Authors: Shaun G. Abbott, Rebecca C. Reynolds, John B. F. de Wit
Abstract:
Low physical activity (PA) levels are a major public health concern in Australia. There is some evidence that PA interventions can increase PA levels via various methods, including online delivery. Low Socioeconomic Status (SES) people participate in less PA than the rest of the population, partly due to poor self-regulation behaviors associated with socioeconomic characteristics. Interventions that involve a particular method of self-regulation, Mental Contrasting with Implementation Intentions (MCII), have regularly achieved healthy behavior change, but few studies focus on PA behavior outcomes, and no studies examining the effect of MCII on the PA behaviors of low SES people have been done. In this study, a pilot randomized controlled trial (RCT) will deliver MCII for PA behavior change to individuals of relative disadvantage for the first time. The current pilot study will predict the sample size for a future full RCT and test the hypothesis that sedentary participants from areas of relative socioeconomic disadvantage in Sydney who learn the MCII technique will be more physically active and will have improved anthropometric and psychological indicators at the completion of a 12-week intervention compared with baseline and control. Eligible participants of relative socioeconomic disadvantage will be randomly assigned to either the 'PA Information Plus MCII Intervention Group' or a 'PA Information-Only Control Group'. Both groups will attend a baseline and a 12-week face-to-face consultation, where PA, anthropometric, and psychological data will be gathered. The intervention group will be guided through an MCII session at the baseline appointment to establish a PA goal to aim to achieve over 12 weeks. Other than these baseline and 12-week consultations, all participant interaction will occur online. All participants will receive a 'Fitbit' accelerometer to objectively record PA as a daily step count, along with a PA diary for the duration of the study. 
PA data will be recorded on a personalized online spreadsheet. Both groups will receive a standard PA information email at weeks 2, 4, and 8. The intervention group will also receive scripted follow-up online appointments to discuss goal progress. The current pilot study is in the recruitment stage, with findings to be presented at the conference in December if selected.Keywords: implementation intentions, mental contrasting, motivation, pedometer, physical activity, socioeconomic
Procedia PDF Downloads 30624527 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid
Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang
Abstract:
Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm that combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem that balances the minimum number of sensors against the minimum error in conductor temperature, and the optimal sensor placement is determined simultaneously. The data of 345 kV transmission lines and hourly weather data from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively, are used by the proposed method. The simulation results indicate that the number of sensors can be reduced using the optimal placement method proposed by the study while an acceptable error in conductor temperature is achieved. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid
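The PSO component of such a hybrid can be sketched in a few lines. The version below is a generic PSO sanity-checked on the sphere function, not the study's POD-coupled placement objective; the inertia and acceleration coefficients are common illustrative choices.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0), seed=1):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Sanity check on the sphere function: minimum at the origin.
best_x, best_f = pso(lambda x: np.sum(x**2), dim=3)
print(best_f)
```

In the placement problem, the objective would instead score a candidate sensor set by its POD-based temperature reconstruction error plus a sensor-count penalty.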
Procedia PDF Downloads 43224526 Stress and Strain Analysis of Notched Bodies Subject to Non-Proportional Loadings
Authors: Ayhan Ince
Abstract:
In this paper, a simplified analytical method for calculating the elasto-plastic stresses and strains of notched bodies subject to non-proportional loading paths is discussed. The method is based on the Neuber notch correction, which relates the incremental elastic and elastic-plastic strain energy densities at the notch root, and on the material constitutive relationship. The validity of the method is demonstrated by comparing computed results of the proposed model against finite element numerical data for a notched shaft. The comparison showed that the model estimates notch-root elasto-plastic stresses and strains with good accuracy using linear-elastic stresses. The proposed model provides a more efficient and simpler analysis method, preferable to expensive experimental component tests and to more complex and time-consuming incremental nonlinear FE analysis. The model is particularly suitable for fatigue life and fatigue damage estimates of notched components subjected to non-proportional loading paths.Keywords: elasto-plastic, stress-strain, notch analysis, non-proportional loadings, cyclic plasticity, fatigue
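For the proportional special case, Neuber's rule reduces to solving sigma·eps = (Kt·S)²/E together with a material law. The sketch below assumes a Ramberg-Osgood law with illustrative constants and solves by bisection; the paper's incremental, energy-density formulation for non-proportional paths is more general than this single-load-level sketch.

```python
# Neuber's rule for one load level: sigma * eps = (Kt * S)^2 / E.
# Material constants below are illustrative, not the paper's data.
E, K_prime, n_prime = 200e3, 1200.0, 0.12   # MPa, MPa, -
Kt, S = 2.5, 150.0                          # stress conc. factor, nominal stress (MPa)

def ramberg_osgood_strain(sigma):
    # Elastic part plus power-law plastic part.
    return sigma / E + (sigma / K_prime) ** (1.0 / n_prime)

target = (Kt * S) ** 2 / E   # Neuber constant (elastic energy-like term)

# Bisection on f(sigma) = sigma * eps(sigma) - target, which is increasing.
lo, hi = 1e-6, Kt * S
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mid * ramberg_osgood_strain(mid) < target:
        lo = mid
    else:
        hi = mid
sigma_notch = 0.5 * (lo + hi)
eps_notch = ramberg_osgood_strain(sigma_notch)
print(round(sigma_notch, 1), round(eps_notch, 5))
```

The notch-root stress lands between the nominal stress and the fictitious elastic value Kt·S, as expected for a yielding notch.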
Procedia PDF Downloads 46624525 A Theoretical Study of Accelerating Neutrons in LINAC Using Magnetic Gradient Method
Authors: Chunduru Amareswara Prasad
Abstract:
The main aim of this proposal is to reveal the secrets of the universe by accelerating neutrons. The proposal, in its abridged version, speaks about the possibility of making neutrons accelerate with the help of thermal energy and magnetic energy under controlled conditions, which would be helpful in revealing hidden secrets of the universe, namely dark energy, and in finding properties of the Higgs boson. The paper mainly speaks about accelerating neutrons to near the velocity of light in a LINAC, using magnetic energy supplied by magnetic pressurizers. A center-of-mass energy of 94 GeV (~0.5c) for two colliding neutron beams can be achieved using this method. The conventional ways to accelerate neutrons have some constraints in accelerating them electromagnetically, as they need to be separated from tritium or deuterium nuclei. This magnetic gradient method provides an efficient and simple way to accelerate neutrons.Keywords: neutron, acceleration, thermal energy, magnetic energy, Higgs boson
Procedia PDF Downloads 32624524 Transparency Obligations under the AI Act Proposal: A Critical Legal Analysis
Authors: Michael Lognoul
Abstract:
In April 2021, the European Commission released its AI Act Proposal, which is the first policy proposal at the European Union level to target AI systems comprehensively, in a horizontal manner. This Proposal notably aims to achieve an ecosystem of trust in the European Union, based on the respect of fundamental rights, regarding AI. Among many other requirements, the AI Act Proposal aims to impose several generic transparency obligations on all AI systems to the benefit of natural persons facing those systems (e.g. information on the AI nature of systems, in case of an interaction with a human). The Proposal also provides for more stringent transparency obligations, specific to AI systems that qualify as high-risk, to the benefit of their users, notably on the characteristics, capabilities, and limitations of the AI systems they use. Against that background, this research firstly presents all such transparency requirements in turn, as well as related obligations, such as the proposed obligations on record keeping. Secondly, it focuses on a legal analysis of their scope of application, of the content of the obligations, and of their practical implications. On the scope of the transparency obligations tailored for high-risk AI systems, the research notably notes that this scope seems relatively narrow, given the proposed legal definition of the notion of users of AI systems. Hence, where end-users do not qualify as users, they may only receive very limited information. This element might potentially raise concern regarding the objective of the Proposal. On the content of the transparency obligations, the research highlights that the information that should benefit users of high-risk AI systems is both very broad and specific, from a technical perspective. Therefore, the information required under those obligations seems to create, prima facie, an adequate framework to ensure trust for users of high-risk AI systems. 
However, on the practical implications of these transparency obligations, the research notes that concerns arise due to the potential technical illiteracy of users of high-risk AI systems. They might not have sufficient technical expertise to fully understand the information provided to them, despite the wording of the Proposal, which requires that information be comprehensible to its recipients (i.e. users). On this matter, the research points out that there could be, more broadly, an important divergence between the level of detail of the information required by the Proposal and the level of expertise of users of high-risk AI systems. As a conclusion, the research provides policy recommendations to tackle (part of) the issues highlighted. It notably recommends broadening the scope of transparency requirements for high-risk AI systems to encompass end-users. It also suggests that principles of explanation, as put forward in the Guidelines for Trustworthy AI of the High-Level Expert Group, should be included in the Proposal in addition to the transparency obligations.Keywords: AI Act Proposal, explainability of AI, high-risk AI systems, transparency requirements
Procedia PDF Downloads 31624523 A TFETI Domain Decompositon Solver for von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening
Authors: Martin Cermak, Stanislav Sysala
Abstract:
In this paper we present an efficient parallel implementation of elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to solve this nonlinear problem in parallel on supercomputers, decreasing the solution time and enabling problems with millions of DOFs. In our approach we consider an associated elastoplastic model with the von Mises plastic criterion and a combination of linear isotropic and kinematic hardening. This model is discretized by the implicit Euler method in time and by the finite element method in space. The resulting system of nonlinear equations has a strongly semismooth and strongly monotone operator, and the semismooth Newton method is applied to solve it. The corresponding linearized problems arising in the Newton iterations are solved in parallel by the above-mentioned TFETI method. The implementation is realized in our in-house MatSol package developed in MATLAB.Keywords: isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution
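The local constitutive update that the Newton iterations linearize can be illustrated by a one-dimensional return mapping with combined linear isotropic-kinematic hardening. This is a didactic analogue with illustrative constants, not the paper's 3D von Mises implementation.

```python
import numpy as np

# 1D return mapping for linear combined isotropic-kinematic hardening.
# Constants are illustrative (MPa).
E, H_iso, H_kin, sigma_y = 200e3, 2e3, 3e3, 250.0

def return_map(eps, eps_p, alpha, back):
    # Elastic trial state.
    sig_tr = E * (eps - eps_p)
    xi = sig_tr - back                      # relative (shifted) stress
    f_tr = abs(xi) - (sigma_y + H_iso * alpha)
    if f_tr <= 0.0:
        return sig_tr, eps_p, alpha, back   # elastic step, no update
    # Plastic corrector: closed form for linear hardening.
    dgamma = f_tr / (E + H_iso + H_kin)
    n = np.sign(xi)
    sig = sig_tr - E * dgamma * n
    return sig, eps_p + dgamma * n, alpha + dgamma, back + H_kin * dgamma * n

# Drive one strain increment well past first yield.
sig, eps_p, alpha, back = return_map(0.005, 0.0, 0.0, 0.0)
print(round(float(sig), 2), round(float(alpha), 5))
```

After the plastic correction, the updated state satisfies the (hardened, shifted) yield condition exactly, which is the consistency requirement the semismooth Newton method exploits.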
Procedia PDF Downloads 42024522 Dynamic Wind Effects in Tall Buildings: A Comparative Study of Synthetic Wind and Brazilian Wind Standard
Authors: Byl Farney Cunha Junior
Abstract:
In this work, the dynamic three-dimensional analysis of a 47-story building located in Goiania city subjected to wind loads generated using both the Brazilian wind code, NBR6123 (ABNT, 1988), and the Synthetic-Wind method is carried out. To model the frames, three different methodologies are used: the shear building model and both two- and three-dimensional finite element models. To start the analysis, a plane frame is initially studied to validate the shear building model, and, in order to compare the results of natural frequencies and displacements at the top of the structure, the same plane frame is modeled using the finite element method in the SAP2000 V10 software. The same steps are applied to an idealized 20-story spatial frame that helps in the presentation of the stiffness correction process applied to columns. Based on these models, the two methods used to generate the wind loads are presented: the discrete model proposed in the Brazilian wind code, NBR6123 (ABNT, 1988), and the Synthetic-Wind method. The latter uses the Davenport spectrum, which is divided into a set of frequencies to generate the temporal series of loads. Finally, the 47-story building is analyzed using both the three-dimensional finite element method in the SAP2000 V10 software and the shear building model. The models are loaded with wind loads generated by the wind code NBR6123 (ABNT, 1988) and by the Synthetic-Wind method considering different wind directions. The displacements and internal forces in columns and beams are compared, and a comparative study considering the situation of a full elevated reservoir is carried out. As can be observed, the displacements obtained by the SAP2000 V10 model are greater when loaded with the NBR6123 (ABNT, 1988) wind load related to the permanent phase of the structure's response.Keywords: finite element method, synthetic wind, tall buildings, shear building
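The core of the Synthetic-Wind idea can be sketched as a spectral-representation series: sample the Davenport spectrum at discrete frequencies and superpose harmonics with random phases to get a fluctuating wind-speed time series. The drag coefficient, mean speed, and frequency discretization below are illustrative, not the study's values.

```python
import numpy as np

k, V10 = 0.005, 40.0          # surface drag coeff., mean speed at 10 m (m/s)

def davenport(f):
    # Davenport gust spectrum S(f), with x = 1200 f / V10.
    x = 1200.0 * f / V10
    return 4.0 * k * V10**2 * x**2 / (f * (1.0 + x**2) ** (4.0 / 3.0))

rng = np.random.default_rng(42)
freqs = np.linspace(0.005, 1.0, 200)      # Hz
df = freqs[1] - freqs[0]
amps = np.sqrt(2.0 * davenport(freqs) * df)   # harmonic amplitudes
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)

t = np.arange(0.0, 600.0, 0.1)            # 10-minute record at 10 Hz
v = amps @ np.cos(2.0 * np.pi * np.outer(freqs, t) + phases[:, None])
print(v.shape, round(float(v.std()), 2))
```

The fluctuating speed series would then be converted to pressures and applied as floor-level load histories in the dynamic analysis.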
Procedia PDF Downloads 27324521 Rumour Containment Using Monitor Placement and Truth Propagation
Authors: Amrah Maryam
Abstract:
The emergence of online social networks (OSNs) has transformed the way we pursue and share information. On the one hand, OSNs provide great ease for the spreading of positive information, while, on the other hand, they may also become a channel for the spreading of malicious rumors and misinformation throughout the social network. Thus, to assure the trustworthiness of OSNs to their users, it is of vital importance to detect misinformation propagation in the network by placing network monitors. In this paper, we aim to place monitors near the suspected nodes with the intent to limit the diffusion of misinformation in the social network, and we also detect the most significant nodes in the network for propagating true information in order to minimize the effect of already diffused misinformation. Thus, we introduce two heuristics: monitor placement using articulation points and truth propagation using eigenvector centrality. Furthermore, to provide real-time workings of the system, we integrate both the monitor placement and truth propagation entities. To demonstrate the effectiveness of the approaches, we have carried out experiments and evaluation on Stanford online social network datasets.Keywords: online social networks, monitor placement, independent cascade model, spread of misinformation
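The two heuristics can be sketched on a toy graph: articulation points (Tarjan's algorithm) mark candidate monitor locations, and the highest-eigenvector-centrality node (found by power iteration) seeds the truth campaign. The graph and node labels below are illustrative, not the Stanford data.

```python
import numpy as np

# Toy undirected graph as an adjacency list: a triangle bridged to a path.
graph = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4], 4: [3, 5], 5: [4],
}

def articulation_points(g):
    # Tarjan's algorithm (recursive; fine for small graphs).
    disc, low, ap, timer = {}, {}, set(), [0]
    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for v in g[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    ap.add(u)
        if parent is None and children > 1:
            ap.add(u)
    for u in g:
        if u not in disc:
            dfs(u, None)
    return ap

def eigenvector_centrality(g, iters=200):
    # Power iteration on the adjacency matrix; nodes must be 0..n-1 ints here.
    n = len(g)
    A = np.zeros((n, n))
    for u in g:
        for v in g[u]:
            A[u, v] = 1.0
    x = np.ones(n)
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return dict(enumerate(x))

monitors = articulation_points(graph)          # place monitors here
cent = eigenvector_centrality(graph)
truth_seed = max(cent, key=cent.get)           # best node to seed truth
print(sorted(monitors), truth_seed)
```

Removing any articulation point disconnects part of the network, so monitoring them covers all inter-component traffic, while the centrality seed maximizes the reach of the corrective information.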
Procedia PDF Downloads 161