Search results for: Additional Damping Method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20633


19883 Stability Analysis and Experimental Evaluation on Maxwell Model of Impedance Control

Authors: Le Fu, Rui Wu, Gang Feng Liu, Jie Zhao

Abstract:

Normally, impedance control methods are based on a model that connects a spring and damper in parallel. The series connection, namely the Maxwell model, has emerged as a counterpart and drawn the attention of robotics researchers. Theoretical analysis shows that the two patterns are equivalent to some extent, but notable differences in response characteristics exist, especially in the effect of damping viscosity. However, this novel impedance control design lacks validation on realistic robot platforms. In this study, stability analysis and experimental evaluation are carried out using a 3-fingered Barrett® robotic hand BH8-282 endowed with tactile sensing, mounted on a torque-controlled lightweight collaborative robot, the KUKA® LBR iiwa 14 R820. Object handover and incoming-object catching tasks are executed for validation and analysis. Experimental results show that the series connection pattern has much better performance in natural impact and shock absorption, which indicates promising applications in robots' safe physical interaction with humans and objects in various environments.
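The qualitative difference between the two connection patterns can be sketched with a toy calculation (all parameter values below are arbitrary assumptions, not the robot's): under a step displacement, the parallel (Voigt) model holds a constant elastic force, while the series (Maxwell) model relaxes it exponentially, which is why it absorbs impacts more gently.

```python
import math

# Illustrative comparison (not the authors' controller): force response of a
# parallel (Kelvin-Voigt) vs a series (Maxwell) spring-damper model to a
# step displacement x0. Parameter values are arbitrary assumptions.
k = 100.0   # spring stiffness [N/m]
c = 20.0    # damping coefficient [N*s/m]
x0 = 0.01   # step displacement [m]

def voigt_force(t):
    # Parallel connection: after the step the velocity is zero,
    # so the force is purely elastic and constant.
    return k * x0

def maxwell_force(t):
    # Series connection: the force relaxes exponentially with
    # time constant tau = c / k.
    tau = c / k
    return k * x0 * math.exp(-t / tau)

for t in (0.0, 0.2, 1.0):
    print(f"t={t:.1f}s  Voigt={voigt_force(t):.3f} N  Maxwell={maxwell_force(t):.3f} N")
```

The relaxing force of the series pattern is the mechanism behind the smoother impact absorption observed in the experiments.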

Keywords: impedance control, Maxwell model, force control, dexterous manipulation

Procedia PDF Downloads 489
19882 A Principal-Agent Model for Sharing Mechanism in Integrated Project Delivery Context

Authors: Shan Li, Qiuwen Ma

Abstract:

Integrated project delivery (IPD) is a project delivery method distinguished by a shared risk/reward mechanism and a multiparty agreement. IPD has drawn increasing attention from the construction industry because of its efficiency in resolving adversarial problems and its reliability in delivering high-performing buildings. However, some evidence shows that some project participants obtained less profit from IPD projects than from typical projects. They attributed this to an unfair IPD sharing mechanism, which resulted in additional time and cost spent negotiating the sharing fractions among project participants. This study investigates reward distribution by constructing a principal-agent model. Based on cooperative game theory, it examines how the shared project rewards should be distributed between the client and non-client parties and identifies the sharing fractions among the non-client parties. It is found that at least half of the project savings should be allocated to the non-client parties to motivate them to create more project value. Second, the client should raise his sharing fractions when the integration among project participants is efficient. In addition, the client should allocate higher sharing fractions to the more able non-client parties. This study can help IPD project participants design fair and motivating sharing mechanisms.

Keywords: cooperative game theory, IPD, principal agent model, sharing mechanism

Procedia PDF Downloads 273
19881 Thermal Behaviors of the Strong Form Factors of Charmonium and Charmed Beauty Mesons from Three Point Sum Rules

Authors: E. Yazıcı, H. Sundu, E. Veli Veliev

Abstract:

In order to understand the nature of strong interactions and the QCD vacuum, the investigation of meson coupling constants plays an important role. Knowledge of the temperature dependence of the form factors is very important for the interpretation of heavy-ion collision experiments, and a more accurate determination of these coupling constants plays a crucial role in understanding hadronic decays. With the increasing center-of-mass energies of the experiments, research on meson interactions has become one of the more interesting problems of hadronic physics. In this study, we analyze the temperature dependence of the strong form factor of the BcBcJ/ψ vertex using the three-point QCD sum rules method. Here, we assume that, by replacing the vacuum condensates and the continuum threshold with their thermal versions, the sum rules for the observables remain valid. In the calculations, we take into account the additional operators that appear in the Wilson expansion at finite temperature. We also investigated the momentum dependence of the form factor at T = 0, fit it to an analytic function, and extrapolated it into the deep time-like region in order to obtain the strong coupling constant of the vertex. Our results are consistent with those existing in the literature.

Keywords: QCD sum rules, thermal QCD, heavy mesons, strong coupling constants

Procedia PDF Downloads 176
19880 Automatic and High Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and predict the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete and hence imprecise, and moreover too slow to be computed efficiently. Therefore, such models might not be applicable for the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, and hence the system must be adapted manually. Therefore, an approach is described that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for any system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise representation of causal correlations. The basis and justification of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of adapting the generated models in real time during operation. Herewith, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has successfully been tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with a precision error of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods are illustrated with different examples.
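A minimal sketch of the regression idea described above, not the authors' implementation: fitting a model whose basis includes products of the input variables, in the spirit of a truncated series expansion. The data below are synthetic and the "hidden" system is an assumption for demonstration.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): least-squares
# regression on a basis that includes products of input variables, in the
# spirit of a truncated series expansion. Synthetic data for demonstration.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
y = 2.0 + 0.5 * x1 - 1.5 * x2 + 3.0 * x1 * x2   # hidden "true" system

# Design matrix with constant, linear and product terms.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))   # recovers the true coefficients
```

Because the product term x1*x2 is part of the basis, the cross-correlation between the two inputs is identified directly, which a purely linear basis would miss.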

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 384
19879 The Advancement of Smart Cushion Product and System Design Enhancing Public Health and Well-Being at Workplace

Authors: Dosun Shin, Assegid Kidane, Pavan Turaga

Abstract:

According to the National Institutes of Health, a sedentary lifestyle leads to a number of health issues, including increased risk of cardiovascular disease, type 2 diabetes, obesity, and certain types of cancer. This project brings together experts in multiple disciplines, combining product design, sensor design, algorithms, and health intervention studies to develop a product and system that helps reduce the amount of time spent sitting at the workplace. This paper illustrates ongoing improvements to the prototypes the research team developed in the initial research, including working prototypes with a software application that were developed and demonstrated for users. Additional modifications were made to improve functionality, aesthetics, and ease of use, which will be discussed in this paper. Extending the foundations created in the initial phase, our approach sought to further improve the product by conducting additional human factors research, studying deficiencies in competitive products, testing various materials and forms, developing working prototypes, and obtaining feedback from additional potential users. The solution consists of an aesthetically pleasing seat cover cushion that easily attaches to common office chairs found in most workplaces, ensuring that a wide variety of people can use the product. The product discreetly contains sensors that track when the user sits on the chair, sending information to a phone app that triggers reminders for users to stand up and move around after sitting for a set amount of time. The design team also analyzed typical office aesthetics and selected materials, colors, and forms that complemented the working environment. Comfort and ease of use remained a high priority as the design team sought to provide a product and system that integrates into the workplace.
As the research team continues to test, improve, and implement this solution for the sedentary workplace, the team seeks to create a viable product that acts as an impetus for a more active workday and lifestyle, further decreasing the proliferation of chronic disease and health issues for sedentary working people. This paper illustrates in detail the processes of engineering, product design, methodology, and testing results.

Keywords: anti-sedentary work behavior, new product development, sensor design, health intervention studies

Procedia PDF Downloads 139
19878 Efficacy of Collagen Matrix Implants in Phacotrabeculectomy with Mitomycin C at One Year

Authors: Lalit Tejwani, Reetika Sharma, Arun Singhvi, Himanshu Shekhar

Abstract:

Purpose: To assess the efficacy of a collagen matrix implant (Ologen) in phacotrabeculectomy augmented with mitomycin C (MMC). Methods: A biodegradable collagen matrix (Ologen) was placed in the subconjunctival and subscleral space in twenty-two eyes of 22 patients with glaucoma and cataract who underwent combined phacoemulsification and trabeculectomy augmented with MMC. All patients were examined preoperatively and on the first postoperative day, and were followed for twelve months after surgery. Any intervention needed during the follow-up period was noted, as were any complications. The primary outcome measure was postoperative intraocular pressure (IOP) at one-year follow-up. Any additional postoperative treatments needed and adverse events were noted. Results: The mean age of the patients included in the study was 57.77 ± 9.68 years (range = 36 to 70 years). All patients were followed for at least one year. Three patients had a history of failed trabeculectomy. Fifteen patients had chronic angle closure glaucoma with cataract, five had primary open angle glaucoma with cataract, one had uveitic glaucoma with cataract, and one had juvenile open angle glaucoma with cataract. Mean preoperative IOP was 32.63 ± 8.29 mm Hg; eighteen patients were on oral antiglaucoma medicines. The mean postoperative IOP was 10.09 ± 2.65 mm Hg at three months, 10.36 ± 2.19 mm Hg at six months, and 11.36 ± 2.72 mm Hg at one-year follow-up. No adverse effect related to Ologen was seen. Anterior chamber reformation was done in five patients, and three needed needling of the bleb. Four patients needed additional antiglaucoma medications during the follow-up period. Conclusions: Combined phacotrabeculectomy with MMC and Ologen implantation appears to be a safe and effective option in glaucoma patients with significant cataract who need trabeculectomy. Comparative studies with a longer duration of follow-up in a larger number of patients are needed.

Keywords: combined surgery, ologen, phacotrabeculectomy, success

Procedia PDF Downloads 190
19877 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme data in an observation can occur due to unusual circumstances during the observation. Such data can provide important information that cannot be provided by other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. The parameter estimates of the Gumbel distribution under the maximum likelihood (ML) method cannot be determined in closed form, so a numerical approach is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has a double exponential form. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter value changes in each iteration. Newton's method is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by updating an approximation of the second-derivative information in each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function; this requires the gradient vector and the Hessian matrix. This research is both theoretical and applied, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and estimates of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The fitted parameters indicate that the intensity of the high rainfall occurring in Purworejo District decreased and that the range of rainfall that occurred decreased.
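The estimation step can be sketched as follows. The data below are synthetic Gumbel draws with assumed parameters, not the Purworejo rainfall records, and the BFGS loop is a minimal hand-rolled version with an Armijo backtracking line search, not the authors' code.

```python
import numpy as np

# Sketch (synthetic data, assumed parameters): maximum-likelihood estimation
# of the Gumbel parameters (mu, beta) with a hand-rolled BFGS iteration.

def neg_log_lik(theta, x):
    mu, beta = theta
    if beta <= 0:
        return np.inf
    z = (x - mu) / beta
    return len(x) * np.log(beta) + np.sum(z + np.exp(-z))

def grad(theta, x):
    mu, beta = theta
    z = (x - mu) / beta
    g_mu = -np.sum(1.0 - np.exp(-z)) / beta
    g_beta = len(x) / beta - np.sum(z * (1.0 - np.exp(-z))) / beta
    return np.array([g_mu, g_beta])

def bfgs(x, theta0, tol=1e-6, max_iter=200):
    theta = np.asarray(theta0, float)
    H = np.eye(2)                        # approximation of the inverse Hessian
    g = grad(theta, x)
    for _ in range(max_iter):
        p = -H @ g                       # quasi-Newton search direction
        t = 1.0                          # backtracking (Armijo) line search
        while neg_log_lik(theta + t * p, x) > neg_log_lik(theta, x) - 1e-4 * t * (g @ p):
            t *= 0.5
            if t < 1e-12:
                break
        s = t * p
        theta_new = theta + s
        g_new = grad(theta_new, x)
        if np.linalg.norm(g_new) < tol:
            return theta_new
        y = g_new - g
        ys = y @ s
        if ys > 1e-12:                   # BFGS update of the inverse Hessian
            rho, I = 1.0 / ys, np.eye(2)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
        theta, g = theta_new, g_new
    return theta

rng = np.random.default_rng(1)
sample = rng.gumbel(loc=10.0, scale=2.0, size=5000)
mu_hat, beta_hat = bfgs(sample, theta0=(sample.mean(), sample.std()))
print(round(mu_hat, 2), round(beta_hat, 2))
```

With 5000 draws the estimates land close to the true (10, 2), illustrating why the quasi-Newton iteration is sufficient even though the likelihood equations have no closed-form solution.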

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 308
19876 Exploring Gaming-Learning Interaction in MMOG Using Data Mining Methods

Authors: Meng-Tzu Cheng, Louisa Rosenheck, Chen-Yen Lin, Eric Klopfer

Abstract:

The purpose of this research is to explore some of the ways in which gameplay data can be analyzed to yield results that feed back into the learning ecosystem. Back-end data was collected for all users as they played an MMOG, The Radix Endeavor, and this study reports the analyses of a specific genetics quest using data mining techniques, including the decision tree method. The study revealed different reasons for quest failure between participants who eventually succeeded and those who never succeeded. Regarding in-game tool use, the trait examiner was a key tool in the quest completion process. Subsequently, the decision tree results showed that a lack of trait examiner usage could be made up for with additional Punnett square uses, displaying multiple pathways to success in this quest. The methods of analysis used in this study and the resulting usage patterns indicate some useful ways that gameplay data can provide insights in two main areas. The first is for game designers to know how players are interacting with and learning from their game. The second is for players themselves, as well as their teachers, to get information on how they are progressing through the game and to provide any help they may need based on strategies and misconceptions identified in the data.
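Decision-tree analyses of the kind described rest on information-gain computations at each candidate split. The sketch below uses hypothetical gameplay records, not the Radix data, and the feature names are illustrative assumptions.

```python
import math
from collections import Counter

# Toy sketch (hypothetical gameplay records, not the Radix back-end data):
# the information-gain computation behind a decision-tree split, asking
# whether "used the trait examiner" separates quest success from failure.

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(records, feature, label="success"):
    base = entropy([r[label] for r in records])
    remainder = 0.0
    for value in {r[feature] for r in records}:
        subset = [r[label] for r in records if r[feature] == value]
        remainder += len(subset) / len(records) * entropy(subset)
    return base - remainder

records = [
    {"trait_examiner": True,  "punnett_square": 3, "success": True},
    {"trait_examiner": True,  "punnett_square": 0, "success": True},
    {"trait_examiner": False, "punnett_square": 4, "success": True},
    {"trait_examiner": False, "punnett_square": 0, "success": False},
    {"trait_examiner": False, "punnett_square": 1, "success": False},
    {"trait_examiner": True,  "punnett_square": 2, "success": True},
]
print(round(info_gain(records, "trait_examiner"), 3))
```

A tree-building algorithm would pick the feature with the highest gain as the split, which is how usage patterns such as the trait-examiner/Punnett-square trade-off surface in the fitted tree.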

Keywords: MMOG, decision tree, genetics, gaming-learning interaction

Procedia PDF Downloads 340
19875 Sparse Modelling of Cancer Patients’ Survival Based on Genomic Copy Number Alterations

Authors: Khaled M. Alqahtani

Abstract:

Copy number alterations (CNA) are variations in the structure of the genome, where certain regions deviate from the typical two chromosomal copies. These alterations are pivotal in understanding tumor progression and are indicative of patients' survival outcomes. However, effectively modeling patients' survival based on their genomic CNA profiles while identifying relevant genomic regions remains a statistical challenge. Various methods, such as the Cox proportional hazard (PH) model with ridge, lasso, or elastic net penalties, have been proposed but often overlook the inherent dependencies between genomic regions, leading to results that are hard to interpret. In this study, we enhance the elastic net penalty by incorporating an additional penalty that accounts for these dependencies. This approach yields smooth parameter estimates and facilitates variable selection, resulting in a sparse solution. Our findings demonstrate that this method outperforms other models in predicting survival outcomes, as evidenced by our simulation study. Moreover, it allows for a more meaningful interpretation of genomic regions associated with patients' survival. We demonstrate the efficacy of our approach using both real data from a lung cancer cohort and simulated datasets.
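The enhanced penalty described above can be sketched as follows. The notation and equal weighting are assumptions for illustration, not the paper's exact formulation: elastic net terms for sparsity and shrinkage, plus a fusion-type term that penalizes differences between coefficients of adjacent genomic regions.

```python
import numpy as np

# Sketch of the penalty idea (assumed notation, not the paper's exact form):
# elastic net (lasso + ridge) plus a fusion term on adjacent genomic
# regions, encouraging smooth, interpretable coefficient profiles.

def penalty(beta, lam1, lam2, lam3):
    beta = np.asarray(beta, float)
    lasso = lam1 * np.sum(np.abs(beta))         # sparsity
    ridge = lam2 * np.sum(beta ** 2)            # shrinkage
    fused = lam3 * np.sum(np.diff(beta) ** 2)   # smoothness across adjacent regions
    return lasso + ridge + fused

# Two coefficient profiles with the same magnitudes: the smooth one is
# penalized less, reflecting the dependence between neighboring regions.
smooth = [0.0, 0.1, 0.2, 0.2, 0.1]
rough  = [0.0, 0.2, 0.0, 0.3, 0.1]
print(penalty(smooth, 1.0, 1.0, 1.0) < penalty(rough, 1.0, 1.0, 1.0))
```

In the full method this penalty would be added to the negative Cox partial log-likelihood and minimized over the coefficient vector.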

Keywords: copy number alterations, cox proportional hazard, lung cancer, regression, sparse solution

Procedia PDF Downloads 24
19874 Techniques for Seismic Strengthening of Historical Monuments from Diagnosis to Implementation

Authors: Mircan Kaya

Abstract:

A multi-disciplinary approach is required in any intervention project for historical monuments. Heritage structures are peculiar due to the complexity of their geometry and the variable, unpredictable characteristics of the original materials used in their construction. Their histories are often complex, and they require a correct diagnosis before the intervention techniques can be chosen. This approach should combine not only technical aspects but also historical research, which may help uncover phenomena involving structural issues and build knowledge of the structure's concept, method of construction, previous interventions, damage processes, and current state. In addition to traditional techniques such as bed joint reinforcement, the repair, strengthening, and restoration of historical buildings may require several modern, innovative techniques, such as pre-stressing and post-tensioning, the use of shape memory alloy devices and shock transmission units, shoring, drilling, and the use of stainless steel or titanium. Regardless of whether the strengthening method is traditional or innovative, it is crucial to recognize that structural strengthening is the process of upgrading the structural system of an existing building with the aim of improving its performance under existing and additional loads, such as seismic loads. This process is much more complex than dealing with a new construction, owing to the fact that several unknown factors are associated with the existing structural system. Material properties, load paths, previous interventions, and existing reinforcement are especially important matters to be considered. There are several examples of seismic strengthening with traditional and innovative techniques around the world, which will be discussed in this paper in detail, including their pros and cons.
Ultimately, however, the main idea underlying the philosophy of a successful intervention with the most appropriate techniques of strengthening a historic monument should be decided by a proper assessment of the specific needs of the building.

Keywords: bed joint reinforcement, historical monuments, post-tensioning, pre-stressing, seismic strengthening, shape memory alloy devices, shock transmitters, tie rods

Procedia PDF Downloads 242
19873 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA

Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata

Abstract:

We propose a method for detecting craters in images of the lunar surface captured by a small space probe. We use principal component analysis (PCA) to detect the craters. Nevertheless, considering the severe environment of space, it is impossible to use a generic computer in practice. Accordingly, we have to implement the method in an FPGA. This paper compares the FPGA and a generic computer in terms of the processing time of the crater detection method using principal component analysis.
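The PCA step can be sketched as follows. The patches below are synthetic, not lunar imagery, and this is a floating-point reference computation, not the FPGA implementation; the "strength value" interpretation is an assumption based on the keyword list.

```python
import numpy as np

# Minimal sketch of the PCA step (synthetic data, not lunar images):
# principal components are the eigenvectors of the covariance matrix of
# flattened image patches; a candidate region is then scored by its
# projection onto the leading components.
rng = np.random.default_rng(0)
patches = rng.normal(size=(100, 16))          # 100 flattened 4x4 patches
centered = patches - patches.mean(axis=0)
cov = centered.T @ centered / (len(patches) - 1)

eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
components = eigvecs[:, ::-1][:, :3]          # top-3 principal directions

# Projection of a patch onto the principal subspace: one coordinate
# ("strength value") per retained component.
score = centered[0] @ components
print(score.shape)
```

On the FPGA, the same covariance accumulation and projection would be realized with fixed-point arithmetic, which is where the processing-time comparison of the paper comes in.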

Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time

Procedia PDF Downloads 535
19872 The Impact of Blended Learning on the Perception of High School Learners Towards Entrepreneurship

Authors: Rylyne Mande Nchu, Robertson Tengeh, Chux Iwu

Abstract:

Blended learning is a global phenomenon and is essential to many institutes of learning as an additional method of teaching that complements more traditional methods of learning. This paper examines the lack of practice of a blended learning approach in entrepreneurship education and how it impacts learners' perception of being entrepreneurial. E-learning is in its infancy within the secondary and high school sectors in South Africa. The conceptual framework of the study is based on theoretical aspects of systemic-constructivist learning implemented in an interactive online learning environment for an entrepreneurship education subject. The formative evaluation research was conducted using mixed methods (quantitative and qualitative) and comprised a survey of high school learners and informant interviews with entrepreneurs. A theoretical analysis of the literature provides the features necessary for creating interactive blended learning environments to be used in an entrepreneurship education subject. The findings of the study show that learners do not always objectively evaluate their capacities. Special attention has to be paid to the development of learners' computer literacy, as well as to activities that would connect online learning to practical training. The needs analysis shows that incorporating blended learning in entrepreneurship education may foster a positive perception of entrepreneurship.

Keywords: blended learning, entrepreneurship education, entrepreneurship intention, entrepreneurial skills

Procedia PDF Downloads 94
19871 MapReduce Logistic Regression Algorithms with RHadoop

Authors: Byung Ho Jung, Dong Hoon Lim

Abstract:

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. Logistic regression is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating the parameters of logistic regression within the MapReduce framework using RHadoop, which integrates R and the Hadoop environment and is applicable to large-scale data. There exist three learning algorithms for logistic regression, namely the gradient descent method, the cost minimization method, and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need a manually chosen learning rate. The experimental results demonstrated that our learning algorithms using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also compared the performance of our Newton-Raphson method with the gradient descent and cost minimization methods. The results showed that our Newton-Raphson method appeared to be the most robust across all data tested.
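The Newton-Raphson update can be sketched on a single machine as follows (the RHadoop/MapReduce distribution of the sums is not shown; in that setting the statistics X'WX and X'(y - p) would be accumulated across mappers and combined in a reducer). The data are synthetic.

```python
import numpy as np

# Single-machine sketch of the Newton-Raphson update for logistic
# regression, on synthetic data. In a MapReduce setting the matrices
# X'WX and the vector X'(y - p) would be computed in distributed chunks.

def newton_logistic(X, y, n_iter=25):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))    # predicted probabilities
        W = p * (1.0 - p)                      # diagonal of the weight matrix
        H = X.T @ (W[:, None] * X)             # Hessian X'WX
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(1000), rng.normal(size=(1000, 2))])
true_beta = np.array([-0.5, 1.0, 2.0])
y = (rng.uniform(size=1000) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
print(np.round(newton_logistic(X, y), 2))
```

No learning rate appears anywhere in the update, which is the practical advantage over gradient descent noted in the abstract.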

Keywords: big data, logistic regression, MapReduce, RHadoop

Procedia PDF Downloads 258
19870 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. 
First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
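The core idea of handling missing entries inside the Kohonen iteration can be sketched as follows. This toy fragment is an illustration of the principle only, not the authors' missSOM code or criterion, and all numbers are synthetic.

```python
import numpy as np

# Illustrative sketch (not the missSOM implementation): one Kohonen-style
# step in which the best-matching unit is found using only the observed
# coordinates of a sample, the missing entry is imputed from that unit's
# prototype, and the prototype is then moved towards the completed sample.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(4, 3))          # 4 map units, 3 variables

x = np.array([0.5, np.nan, -0.2])             # partially observed sample
obs = ~np.isnan(x)

# Best-matching unit: squared distance restricted to observed coordinates.
d = np.sum((prototypes[:, obs] - x[obs]) ** 2, axis=1)
bmu = int(np.argmin(d))

# Impute the missing value from the winning prototype ...
x_imputed = np.where(obs, x, prototypes[bmu])
# ... and move the prototype towards the completed sample.
lr = 0.1
prototypes[bmu] += lr * (x_imputed - prototypes[bmu])
print(bmu, np.round(x_imputed, 3))
```

Iterating steps of this kind over all samples is what allows clustering and imputation to inform each other, rather than imputing once up front and then running a standard Kohonen algorithm.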

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 133
19869 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries

Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis

Abstract:

In the present work, a numerical method is presented for estimating the appropriate gradient magnetic fields for optimally driving particles to a desired area inside the human body. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM), and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that aims to eliminate the deviation of the nanoparticles from a desired path. Hence, the gradient magnetic field is constantly adjusted so that the particles follow the desired trajectory as closely as possible. Using the proposed method, it becomes clear that particle diameter is a crucial parameter for efficient navigation; increasing the particles' diameter decreases their deviation from the desired path. Moreover, the navigation method can guide nanoparticles to the desired areas with an efficiency of approximately 99%.
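The iterative correction idea can be caricatured in one dimension. Here a simple proportional correction stands in for the full CFD-DEM-CMA pipeline, and every parameter value is an assumption for illustration.

```python
# Toy 1-D sketch of the iteration idea (a proportional correction standing
# in for the CFD-DEM-CMA pipeline; all values are assumptions): at each
# step the applied field "force" is adjusted in proportion to the
# particle's deviation from the desired trajectory.
dt, drag, gain = 0.01, 5.0, 50.0
pos, vel = 0.0, 0.0

def desired(t):
    return 0.5 * t            # desired straight path

trajectory = []
for k in range(500):
    t = k * dt
    dev = desired(t) - pos
    trajectory.append(abs(dev))
    force = gain * dev        # corrective "magnetic" force
    vel += dt * (force - drag * vel)   # drag from the surrounding fluid
    pos += dt * vel
print(round(trajectory[-1], 4))        # deviation settles to a small offset
```

In the actual method the corrective field is not a simple proportional law but is searched for by the CMA evolution strategy against CFD-DEM simulations; the sketch only shows the shrinking-deviation loop structure.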

Keywords: computational fluid dynamics, CFD, covariance matrix adaptation evolution strategy, discrete element method, DEM, magnetic navigation, spherical particles

Procedia PDF Downloads 123
19868 Effect of Type of Pile and Its Installation Method on Pile Bearing Capacity by Physical Modelling in Frustum Confining Vessel

Authors: Seyed Abolhasan Naeini, M. Mortezaee

Abstract:

Various factors, such as the installation method, the pile type, the pile material, and the pile shape, can affect the final bearing capacity of a pile installed in soil; among them, the installation method is of special importance. Physical modeling is among the best options for the laboratory study of pile behavior. Therefore, the current paper first presents and reviews the frustum confining vessel (FCV) as a suitable tool for the physical modeling of deep foundations. Then, by describing loading tests of two steel piles, one open-ended and one closed-ended, each installed by two methods, "with displacement" and "without displacement", the effect of end conditions and installation method on the final bearing capacity of the pile is investigated. The soil used in the current paper is Firoozkooh silty sand. The results of the experiments show that, in general, the without-displacement installation method yields a larger bearing capacity for both piles, and for a given installation method the closed-ended pile shows a slightly higher bearing capacity.

Keywords: physical modeling, frustum confining vessel, pile, bearing capacity, installation method

Procedia PDF Downloads 134
19867 Seismic Fragility Functions of RC Moment Frames Using Incremental Dynamic Analyses

Authors: Seung-Won Lee, JongSoo Lee, Won-Jik Yang, Hyung-Joon Kim

Abstract:

The capacity spectrum method (CSM), one of the methodologies used to evaluate the seismic fragility of building structures, has long been recognized as the most convenient method, even though it has several limitations in predicting the seismic response of the structures of interest. This paper proposes a procedure for estimating seismic fragility curves using incremental dynamic analysis (IDA) rather than a CSM. To achieve the research purpose, this study compares the seismic fragility curves of a 5-story reinforced concrete (RC) moment frame obtained from both methods, the IDA method and a CSM. The two sets of fragility curves are similar in the slight and moderate damage states, whereas the curves obtained from the IDA method present less variation (or uncertainty) in the extensive and complete damage states. This is because the IDA method, unlike the CSM, can properly capture the structural response beyond yielding and can directly account for higher-mode effects. From these observations, the CSM could overestimate the seismic vulnerability of the studied structure in the extensive and complete damage states.
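Converting IDA results into a fragility curve is commonly done by fitting a lognormal model to the intensities at which a damage state is reached. The sketch below uses hypothetical intensity values, not the paper's 5-story frame results, and the lognormal form is the common modeling assumption rather than the authors' stated procedure.

```python
import math

# Sketch of the fragility-fitting step (hypothetical IDA results): intensity
# measures at which a damage state is reached are summarized by a lognormal
# fragility curve with median theta and logarithmic dispersion beta.
im_collapse = [0.42, 0.55, 0.61, 0.70, 0.78, 0.85, 0.93, 1.10]   # e.g. Sa(T1) [g]

logs = [math.log(v) for v in im_collapse]
theta = math.exp(sum(logs) / len(logs))                    # median capacity
beta = (sum((x - math.log(theta)) ** 2 for x in logs)
        / (len(logs) - 1)) ** 0.5                          # log-standard deviation

def fragility(im):
    # P(damage state reached | IM = im) under the lognormal assumption.
    z = (math.log(im) - math.log(theta)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

print(round(theta, 3), round(beta, 3), round(fragility(theta), 2))
```

The smaller record-to-record dispersion beta obtained from IDA in the severe damage states is precisely the "less variation" the abstract refers to.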

Keywords: seismic fragility curve, incremental dynamic analysis, capacity spectrum method, reinforced concrete moment frame

Procedia PDF Downloads 408
19866 Low-Level Forced and Ambient Vibration Tests on URM Building Strengthened by Dampers

Authors: Rafik Taleb, Farid Bouriche, Mehdi Boukri, Fouad Kehila

Abstract:

The aim of this paper is to investigate the dynamic behavior of an unreinforced masonry (URM) building strengthened with DC-90 dampers through ambient and low-level forced vibration tests. Ambient and forced vibration techniques are usually applied to reinforced concrete or steel buildings to understand and identify their dynamic behavior; however, less is known about their applicability to masonry buildings. Ambient vibrations were measured before and after strengthening of the URM building with the DC-90 damper system. For the forced vibration test, a series of low-amplitude steady-state harmonic forced vibration tests was conducted after strengthening using an eccentric mass shaker. The resonant frequency curves, mode shapes, and damping coefficients, as well as the stress distribution in the steel braces of the DC-90 dampers, were investigated and could be defined. It was shown that the dynamic behavior of the masonry building, even though it is irregular and has deformable floors, can be effectively represented. It can be concluded that the strengthening does not change the dynamic properties of the building, because the low-amplitude excitation does not activate the dampers.
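Damping coefficients of the kind mentioned above are typically extracted from measured resonance curves with the half-power (-3 dB) bandwidth method. A minimal sketch follows; the frequency-response data here are synthetic (a single-degree-of-freedom curve with assumed fn = 4 Hz, zeta = 0.05), not the paper's measurements:

```python
import math

def half_power_damping(freqs, amps):
    """Natural frequency and damping ratio from a frequency-response curve
    via the half-power (-3 dB) bandwidth method."""
    peak_i = max(range(len(amps)), key=lambda i: amps[i])
    fn, target = freqs[peak_i], amps[peak_i] / math.sqrt(2.0)

    def crossing(idx_range):
        # linear interpolation of the first crossing of the half-power level
        for i in idx_range:
            a0, a1 = amps[i], amps[i + 1]
            if (a0 - target) * (a1 - target) <= 0.0 and a0 != a1:
                return freqs[i] + (target - a0) * (freqs[i + 1] - freqs[i]) / (a1 - a0)
        return None

    f1 = crossing(range(peak_i - 1, -1, -1))     # crossing below resonance
    f2 = crossing(range(peak_i, len(amps) - 1))  # crossing above resonance
    return fn, (f2 - f1) / (2.0 * fn)            # zeta = bandwidth / (2 fn)

# synthetic SDOF frequency-response curve (fn = 4 Hz, zeta = 0.05)
fn_true, zeta_true = 4.0, 0.05
freqs = [2.0 + 0.005 * i for i in range(801)]
amps = [1.0 / math.sqrt((1 - (f / fn_true) ** 2) ** 2
                        + (2 * zeta_true * f / fn_true) ** 2) for f in freqs]
fn_est, zeta_est = half_power_damping(freqs, amps)
```

On a real forced-vibration test, `freqs` and `amps` would be the measured steady-state response amplitudes at each shaker frequency.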

Keywords: ambient vibrations, masonry buildings, forced vibrations, structural dynamic identification

Procedia PDF Downloads 396
19865 The Significance of Childhood in Shaping Family Microsystems from the Perspective of Biographical Learning: Narratives of Adults

Authors: Kornelia Kordiak

Abstract:

The research is based on a biographical approach, which serves as a foundation for understanding individual human destinies through analysis of the context of life experiences. It focuses on the significance of childhood in shaping family micro-worlds from the perspective of biographical learning. Here, the family micro-world is understood as a complex of beliefs and judgments about elements of the 'total universe' based on the individual's experiences. The main aim of the research is to understand the importance of childhood in shaping family micro-worlds from the perspective of reflection on biographical learning. Additionally, it contributes to a deeper understanding of the familial experiences of the studied individuals who form these family micro-worlds and of the course of the biographical learning process in the subjects. Biographical research aligns with an interpretative paradigm, in which individuals are treated as active interpreters of the world, giving meaning to their experiences and actions based on their own values and beliefs. The research methods used in the project, the narrative interview and the analysis of personal documents, make it possible to obtain a multidimensional perspective on the phenomenon under study. Narrative interviews serve as the main data collection method, allowing the researcher to delve into various life contexts of the individuals. Analysis of these narratives identifies key moments and life patterns and reveals the significance of childhood in shaping family micro-worlds. Moreover, analysis of personal documents such as diaries or photographs enriches the understanding of the studied phenomena by providing additional contexts and perspectives. The research will be conducted in three phases: preparatory, main, and final. The anticipated schedule includes preparation of research tools, selection of the research sample, conducting narrative interviews and analysis of personal documents, as well as analysis and interpretation of the collected research material. The narrative interview method and document analysis will be used to capture various contexts and interpretations of childhood experiences and family relations. The research will contribute to a better understanding of family dynamics and individual developmental processes. It will allow for the identification and understanding of mechanisms of biographical learning and their significance in shaping identity and family relations. Analysis of adult narratives will enable the identification of factors determining patterns of behavior and attitudes in adult life, which may have significant implications for pedagogical practice.

Keywords: childhood, adulthood, biographical learning, narrative interview, analysis of personal documents, family micro-worlds

Procedia PDF Downloads 13
19864 High Order Block Implicit Multi-Step (HOBIM) Methods for the Solution of Stiff Ordinary Differential Equations

Authors: J. P. Chollom, G. M. Kumleng, S. Longwap

Abstract:

The search for higher-order A-stable linear multi-step methods has long interested numerical analysts and has been pursued either through higher derivatives of the solution or by inserting additional off-step points, super-future points, and the like. These methods are suitable for the solution of stiff differential equations, which exhibit characteristics that place a severe restriction on the choice of step size; only methods with large regions of absolute stability remain suitable for such equations. In this paper, high-order block implicit multi-step methods of hybrid form, up to order twelve, have been constructed using the multi-step collocation approach by inserting one or more off-step points into the multi-step method. The accuracy and stability properties of the new methods are investigated and are shown to yield A-stable methods, a property desirable for methods suitable for the solution of stiff ODEs. The new high-order block implicit multi-step methods, used as block integrators, are tested on stiff differential systems, and the results reveal that the new methods are efficient and compete favourably with the state-of-the-art MATLAB ode23 code.
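The twelfth-order block methods themselves are beyond an abstract-sized example, but the A-stability requirement they satisfy can be illustrated with the simplest A-stable implicit method, backward Euler, against explicit Euler on the stiff test equation y' = λy with λ = -50 and a step size far outside the explicit stability region:

```python
def explicit_euler(lam, y0, h, n):
    y = y0
    for _ in range(n):
        y = y + h * lam * y          # y_{k+1} = (1 + h*lam) * y_k
    return y

def implicit_euler(lam, y0, h, n):
    y = y0
    for _ in range(n):
        y = y / (1.0 - h * lam)      # solve y_{k+1} = y_k + h*lam*y_{k+1}
    return y

lam, h = -50.0, 0.1                  # |1 + h*lam| = 4 > 1: explicit is unstable
y_exp = explicit_euler(lam, 1.0, h, 40)   # oscillates and blows up
y_imp = implicit_euler(lam, 1.0, h, 40)   # decays, like the exact e^(lam*t)
```

An A-stable method keeps |y| decaying for any h when Re(λ) < 0, which is exactly why only methods with large regions of absolute stability are usable on stiff systems without a crippling step-size restriction.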

Keywords: block linear multistep methods, high order, implicit, stiff differential equations

Procedia PDF Downloads 341
19863 Approximations of Fractional Derivatives and Their Applications in Solving Non-Linear Fractional Variational Problems

Authors: Harendra Singh, Rajesh Pandey

Abstract:

The paper presents a numerical method, based on an operational matrix of integration and the Rayleigh-Ritz method, for the solution of a class of non-linear fractional variational problems (NLFVPs). Chebyshev polynomials of the first kind are used for the construction of the operational matrix. Using the operational matrix and the Rayleigh-Ritz method, the NLFVP is converted into a system of non-linear algebraic equations, and by solving these equations we obtain an approximate solution of the NLFVP. A convergence analysis of the proposed method is provided. Numerical experiments are performed to show the applicability of the proposed method. The obtained numerical results are compared with the exact solution and with the solution obtained using Chebyshev polynomials of the third kind. Further, the results are shown graphically for the different fractional orders involved in the problems.
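Operational-matrix constructions of this kind ultimately rest on knowing the fractional derivatives of the basis monomials. As a worked example of the underlying formula (not the paper's Chebyshev construction), the Caputo derivative of t^k is D^α t^k = Γ(k+1)/Γ(k-α+1) · t^(k-α):

```python
import math

def caputo_power(k, alpha, t):
    """Caputo fractional derivative of t**k (k > 0, 0 < alpha <= 1):
    D^alpha t^k = Gamma(k+1) / Gamma(k-alpha+1) * t**(k-alpha)."""
    return math.gamma(k + 1) / math.gamma(k - alpha + 1) * t ** (k - alpha)

# alpha = 1 recovers the classical derivative: d/dt t^2 at t=3 is 6
classical = caputo_power(2, 1.0, 3.0)
# the well-known half derivative of t: D^{1/2} t = 2*sqrt(t/pi)
half = caputo_power(1, 0.5, 1.0)
```

Applying this rule term by term to a truncated Chebyshev expansion is what produces the entries of the fractional operational matrix.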

Keywords: non-linear fractional variational problems, Rayleigh-Ritz method, convergence analysis, error analysis

Procedia PDF Downloads 279
19862 Mixture of Polymers and Coating Fullerene Soft Nanoparticles

Authors: L. Bouzina, A. Bensafi, M. Duval, C. Mathis, M. Rawiso

Abstract:

We study the stability and structural properties of mixtures of model nanoparticles and non-adsorbing polymers in the 'protein limit', where the size of the polymers substantially exceeds the particle size. We have synthesized, at the Institut Charles Sadron (Strasbourg), model nanoparticles by coating fullerene C60 molecules with low-molecular-weight polystyrene (PS) chains (6 PS chains with a degree of polymerization close to 25 or 50 are grafted onto each fullerene C60 molecule). We present a small-angle neutron scattering (SANS) study of tetrahydrofuran (THF) solutions involving long polystyrene (PS) chains and fullerene (C60) nanoparticles. Long PS chains and C60 nanoparticles with different arm lengths were synthesized either hydrogenated or deuterated. They were characterized through size exclusion chromatography (SEC) and quasielastic light scattering (QLS). In this way, the solubility of the C60 nanoparticles in the usual good solvents of PS was controlled. SANS experiments were performed using the contrast variation method in order to measure the partial scattering functions related to both components. They provide information about the dispersion state of the C60 nanoparticles as well as the average conformation of the long PS chains. Specifically, they show that the addition of long polymer chains leads to an additional attractive interaction between the soft nanoparticles.

Keywords: fullerene nanoparticles, polymer, small angle neutron scattering, solubility

Procedia PDF Downloads 359
19861 Half-Metallic Ferromagnetism in Ternary Zinc Blende Fe/In0.5Ga0.5As/InP Superlattice: First-Principles Study

Authors: N. Berrouachedi, M. Bouslama, S. Rioual, B. Lescop, J. Langlois

Abstract:

Using first-principles calculations within the LSDA (local spin density approximation) method based on density functional theory (DFT), the electronic structure and magnetic properties of the zinc blende Fe/In0.5Ga0.5As/InP superlattice are investigated. This compound is found to be a half-metallic ferromagnet with a total magnetic moment of 2.25 μB per Fe. In addition, we report XRD measurements of the thick iron sample before and after annealing. One should note, after the annealing treatment at the higher temperature, the disappearance of the peak associated with the Fe(001) plane. In contrast, after annealing at low temperature we observed additional peaks attributed to the presence of indium and Fe2As. This suggests a subsequent process consisting of a strong migration of atoms followed by crystallization at the higher temperature. To investigate the origin of the magnetism and the electronic structure of these zinc blende compounds, we calculated the total and partial DOS of FeInP. One can see that µtotal = 4.24 µB and µFe = 3.27 µB, whereas µIn = 0.021 µB and µP = 0.049 µB. These results predict that the FeInP compound belongs to the class of zinc blende half-metallic (HM) ferromagnets, with a pseudo-gap of 0.93 eV, and that such materials are promising for spintronic devices.

Keywords: zinc blende structure, half-metallic ferromagnet, spin moments, total and partial DOS, XRD, Wien2k

Procedia PDF Downloads 249
19860 An Approximation Method for Exact Boundary Controllability of the Euler-Bernoulli Beam Equation

Authors: A. Khernane, N. Khelil, L. Djerou

Abstract:

The aim of this work is to study the numerical implementation of the Hilbert uniqueness method for the exact boundary controllability of the Euler-Bernoulli beam equation. This study can be difficult, depending on the problem under consideration (geometry, control, and dimension) and on the numerical method used. Knowledge of the asymptotic behaviour of the control governing the system at time T may be useful for its calculation, and this idea is developed in the present study. As a first step, we characterize the solution by a minimization principle; secondly, we propose a method for its resolution to approximate the control steering the considered system to rest at time T.

Keywords: boundary control, exact controllability, finite difference methods, functional optimization

Procedia PDF Downloads 327
19859 Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method

Authors: Cheng Zhang, James Marco, Walid Allafi, Truong Q. Dinh, W. D. Widanage

Abstract:

Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperatures and state of charge (SOC) levels, and therefore online parameter identification can improve the modelling accuracy. This paper presents a method for online ECM parameter identification using a continuous-time (CT) estimation approach. The CT estimation method has several advantages over discrete-time (DT) estimation methods for ECM parameter identification, owing to the widely separated battery dynamic modes and the fast sampling. The presented method can also be used for online SOC estimation. Test data are collected using a lithium-ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy than the conventional DT recursive least squares method. The effectiveness of the presented method for online SOC estimation is also verified on test data.
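The conventional DT recursive least squares baseline that the CT method is compared against can be sketched for a generic first-order ARX model y_k = a·y_{k-1} + b0·u_k + b1·u_{k-1} (the shape a one-RC-branch ECM takes after discretization). The data and parameter values below are illustrative, not the paper's cell data:

```python
import math

def rls_identify(us, ys, lam=1.0):
    """Discrete-time recursive least squares for the ARX model
    y_k = a*y_{k-1} + b0*u_k + b1*u_{k-1}; returns [a, b0, b1]."""
    n = 3
    theta = [0.0] * n
    P = [[1e6 if i == j else 0.0 for j in range(n)] for i in range(n)]  # large P0
    for k in range(1, len(ys)):
        phi = [ys[k - 1], us[k], us[k - 1]]                  # regressor vector
        Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
        gain = [p / denom for p in Pphi]
        err = ys[k] - sum(theta[i] * phi[i] for i in range(n))
        theta = [theta[i] + gain[i] * err for i in range(n)]
        P = [[(P[i][j] - gain[i] * Pphi[j]) / lam for j in range(n)]
             for i in range(n)]
    return theta

# illustrative noise-free data from known parameters a=0.9, b0=0.1, b1=0.05
us = [math.sin(0.5 * k) + 0.3 * math.sin(1.7 * k) for k in range(200)]
ys = [0.0]
for k in range(1, 200):
    ys.append(0.9 * ys[k - 1] + 0.1 * us[k] + 0.05 * us[k - 1])
a_hat, b0_hat, b1_hat = rls_identify(us, ys)
```

With a forgetting factor `lam < 1`, the same recursion tracks slowly varying parameters, which is why RLS is the usual online baseline for ECM identification.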

Keywords: electric circuit model, continuous time domain estimation, linear integral filter method, parameter and SOC estimation, recursive least square

Procedia PDF Downloads 362
19858 Robust State Feedback Controller for an Active Suspension System

Authors: Hussein Altartouri

Abstract:

The purpose of this paper is to present the modeling and control of an active suspension system using a robust state feedback controller implemented for a half-car model. This system represents a mechatronic system, since it contains all the essential components of one. It must adapt to different conditions that are difficult to reconcile, such as disturbances, slippage, and motion on rough roads (containing rocks, stones, and other obstacles). Some current automobile suspension systems use only passive components, with fixed spring and damping coefficients. Vehicle suspension systems are used to provide good road handling and improve passenger comfort, and passive suspensions can only offer a compromise between these two conflicting criteria. Active suspension offers the ability to go beyond this traditional design compromise between handling and comfort by directly controlling the suspension force actuators. In this study, a robust state feedback controller is designed and applied to the active suspension system of a half-car model.
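The effect of state feedback on the handling/comfort trade-off can be illustrated with a deliberately simplified single-degree-of-freedom sprung-mass sketch, in which an actuator force u = -k1·x - k2·v adds effective stiffness and damping. All masses, coefficients, and gains below are assumed for illustration and are not the paper's half-car parameters:

```python
def simulate(active):
    """Semi-implicit Euler simulation of a simplified single-DOF sprung-mass
    model driven over a 5 cm step bump; 'active' adds u = -k1*x - k2*v.
    Returns the peak body displacement."""
    ms, ks, cs = 300.0, 20000.0, 1000.0      # assumed mass/spring/damper values
    k1, k2 = 5000.0, 2000.0                  # assumed feedback gains
    dt, x, v, road = 1e-3, 0.0, 0.0, 0.05
    peak = 0.0
    for _ in range(3000):                    # 3 s of response
        u = -(k1 * x + k2 * v) if active else 0.0
        a = (-ks * (x - road) - cs * v + u) / ms
        v += a * dt
        x += v * dt
        peak = max(peak, x)
    return peak

passive_peak = simulate(False)   # lightly damped: large overshoot
active_peak = simulate(True)     # feedback adds damping, overshoot shrinks
```

The velocity-feedback gain k2 raises the effective damping ratio, which is the basic mechanism by which a full-state feedback design (for the complete half-car state vector) improves both comfort and road holding at once.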

Keywords: half-car model, active suspension system, state feedback, road profile

Procedia PDF Downloads 376
19857 Assessment of Work-Related Stress and Its Predictors in Ethiopian Federal Bureau of Investigation in Addis Ababa

Authors: Zelalem Markos Borko

Abstract:

Work-related stress is a reaction that occurs when work pressures become excessive. Unless properly managed, stress leads to high employee turnover, decreased performance, illness, and absenteeism. Yet little has been reported regarding work-related stress and its predictors in the study area. Therefore, the objective of this study was to assess the prevalence of stress and its predictors in the study area. To that end, a cross-sectional study was conducted on 281 employees of the Ethiopian Federal Bureau of Investigation selected by stratified random sampling. Survey questionnaire scales were employed to collect the data. Data were analyzed using percentages, Pearson correlation coefficients, simple linear regression, multiple linear regression, independent t-tests, and one-way ANOVA. In the present study, 13.9% of participants experienced high stress and 13.5% experienced low stress, while the remaining 72.6% experienced moderate stress. No significant group differences were found by age, gender, marital status, educational level, years of service, or police rank. The study concludes that role conflict, performance over-utilization, role ambiguity, and qualitative and quantitative role overload together explain 39.6% of the variance in work-related stress. This indicates that 60.4% of the variation in stress is explained by other factors, so additional research should be done to identify further predictors of stress. To prevent occupational stress among police officers, the Ethiopian Federal Bureau of Investigation should develop stress-reduction strategies based on the factors identified here.
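The 39.6% figure is the coefficient of determination R² of the multiple linear regression. A minimal sketch of how R² is computed from an ordinary least squares fit follows; the data below are synthetic stand-ins, not the survey responses:

```python
def ols_r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on X (intercept included),
    solved via the normal equations with Gauss-Jordan elimination."""
    n, p = len(X), len(X[0]) + 1
    A = [[1.0] + list(row) for row in X]                 # intercept column
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)]
           for i in range(p)]
    Aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(p)]
    M = [AtA[i] + [Aty[i]] for i in range(p)]
    for c in range(p):                                   # elimination w/ pivoting
        piv = max(range(c, p), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(p):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][j] - f * M[c][j] for j in range(p + 1)]
    beta = [M[i][p] / M[i][i] for i in range(p)]
    yhat = [sum(b * a for b, a in zip(beta, A[k])) for k in range(n)]
    ybar = sum(y) / n
    ss_res = sum((y[k] - yhat[k]) ** 2 for k in range(n))
    ss_tot = sum((y[k] - ybar) ** 2 for k in range(n))
    return 1.0 - ss_res / ss_tot

# synthetic example: two predictors (e.g. role conflict, role overload scores)
X = [[1, 2], [2, 1], [3, 4], [4, 3], [5, 6], [6, 5]]
y_exact = [3.0 + 2.0 * a + 1.0 * b for a, b in X]
r2_exact = ols_r_squared(X, y_exact)          # fully determined response
y_noisy = list(y_exact)
y_noisy[0] += 4.0                             # perturb one response
r2_noisy = ols_r_squared(X, y_noisy)          # explained variance drops
```

An R² of 0.396 means the five predictors jointly account for 39.6% of the variance in the stress score, exactly as the abstract states.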

Keywords: work-related stress, Ethiopian federal bureau of investigation, predictors, Addis Ababa

Procedia PDF Downloads 50
19856 Numerical Investigation of Embankment Settlement Improved by Method of Preloading by Vertical Drains

Authors: Seyed Abolhasan Naeini, Saeideh Mohammadi

Abstract:

Time-dependent settlement due to loading on soft saturated soils produces many problems, such as large consolidation settlements and low consolidation rates. In addition, the long-term consolidation settlement of soft soil underlying an embankment leads to unpredicted settlements and cracks on the soil surface. Preloading is an effective improvement method for solving this problem, and using vertical drains with preloading is an effective way of improving soft soils. Applying the deep soil mixing method to soft soils is another effective improvement technique. There are few studies on using the two methods of preloading and deep soil mixing simultaneously. In this paper, the combined effect of preloading with vertical drains and deep soil mixing is investigated through the finite element code PLAXIS 2D. The influence of parameters such as the spacing of the deep soil mixing columns, and the presence of vertical drains and the distance between them, on the settlement and the stability factor of safety of an embankment founded on soft soil is investigated in this research.
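The consolidation speed-up provided by vertical drains is classically estimated with Barron's equal-strain solution for ideal drains, U_h = 1 - exp(-8·T_h / F(n)). A sketch with assumed drain and soil parameters (not the values used in the PLAXIS model):

```python
import math

def barron_u(ch, t, de, dw):
    """Average degree of radial consolidation U_h for an ideal vertical drain
    (Barron's equal-strain solution, no smear or well resistance)."""
    n = de / dw                                   # spacing ratio
    Fn = (n**2 / (n**2 - 1.0)) * math.log(n) - (3.0 * n**2 - 1.0) / (4.0 * n**2)
    Th = ch * t / de**2                           # radial time factor
    return 1.0 - math.exp(-8.0 * Th / Fn)

# assumed parameters: ch = 2 m^2/yr, 1.5 m triangular drain spacing
de = 1.05 * 1.5        # equivalent diameter of the unit cell (m)
dw = 0.066             # drain diameter (m)
u_half_year = barron_u(2.0, 0.5, de, dw)   # degree of consolidation at 6 months
```

Halving the drain spacing shrinks de and raises T_h roughly fourfold, which is why drain spacing is one of the governing parameters studied in the finite element analysis.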

Keywords: preloading, soft soil, vertical drains, deep soil mixing, consolidation settlement

Procedia PDF Downloads 199
19855 A Simulation-Based Method for Evaluation of Energy System Cooperation between Pulp and Paper Mills and a District Heating System: A Case Study

Authors: Alexander Hedlund, Anna-Karin Stengard, Olof Björkqvist

Abstract:

A step towards reducing greenhouse gas emissions and energy consumption is to integrate the energy systems of several industries. This work is based on a case study of the integration of pulp and paper mills with a district heating system in Sundsvall, Sweden. Present research shows that it is possible to achieve a significant reduction in the electricity demand of the mechanical pulping process. However, the profitability of such efficiency measures can be an issue, as the excess steam recovered from the refiners decreases with the electricity consumption, and consequently the fuel demand for steam production increases. If the fuel price is similar to the electricity price, this reduces the profit of such a project. If the paper mill can be integrated with a district heating system, excess heat from a nearby kraft pulp mill can be upgraded to process steam via the district heating system in order to avoid the additional fuel need. The concept is investigated using a simulation model describing the mass and energy balances as well as the operating margin. Three scenarios were analyzed: reference, electricity reduction, and energy substitution. The simulations show that the total energy input to the system is lowest in the energy substitution scenario. Additionally, in this scenario the steam from the incineration boiler covers not only the steam shortage but also part of the steam otherwise produced by the biofuel boiler, the cooling tower connected to the incineration boiler is no longer needed, and the excess heat can cover the whole district heating load throughout the year. The study shows a substantial economic advantage if all stakeholders act together as one system. However, the costs and benefits are unequally shared between the actors, which means that new business models are needed to share the system costs and benefits.

Keywords: energy system, cooperation, simulation method, excess heat, district heating

Procedia PDF Downloads 214
19854 Prediction of Fluid Properties of an Iranian Oil Field Using a Radial Basis Neural Network

Authors: Abdolreza Memari

Abstract:

In this article, a numerical method is used to estimate the viscosity of crude oil. The viscosity is considered for three states: saturated oil viscosity, viscosity above the bubble point, and viscosity below the saturation pressure. The crude oil's viscosity is first estimated using the KHAN model and the roller ball method. Then, using these data, which include the conditions relevant to measuring viscosity, a radial basis neural network is trained. This network is a two-layer artificial neural network whose hidden-layer activation function is a Gaussian. After training the radial basis neural network, the results of the experimental method and of the artificial intelligence model are compared. Once trained, the network is able to estimate the crude oil's viscosity, without using the KHAN model or experimental conditions, under any other condition with acceptable accuracy. The results show that the radial basis neural network has a high capability for estimating crude oil viscosity; saving time and cost is another advantage of this investigation.

Keywords: viscosity, Iranian crude oil, radial basis neural network, roller ball method, KHAN model

Procedia PDF Downloads 482