Search results for: mixed effects models
16346 A Greener Approach for the Recovery of Proteins from Meat Industries
Authors: Jesus Hernandez, Zead Elzoeiry, Md. S. Islam, Abel E. Navarro
Abstract:
The adsorption of bovine serum albumin (BSA) and human hemoglobin (Hb) on naturally occurring adsorbents was studied to evaluate the potential recovery of proteins from meat industry residues. Spent peppermint tea (PM), powdered purple corn cob (PC), natural clay (NC) and chemically-modified clay (MC) were investigated to elucidate the effects of pH, adsorbent dose, initial protein concentration, and the presence of salts and heavy metals. Equilibrium data were fitted to isotherm models, giving maximum adsorption capacities at pH 8 of 318 and 344 mg BSA/g for PM and NC, respectively. Moreover, Hb displayed maximum adsorption capacities at pH 5 of 125 and 143 mg/g for PM and PC, respectively. A Hofmeister salt effect was observed only for the PM/Hb system. Salts tend to decrease protein adsorption, and the presence of Cu(II) ions had negligible impact on the adsorption onto NC and PC. Desorption experiments confirmed that more than 85% of both proteins can be recovered with diluted acids and bases. SEM, EDX, and TGA analyses demonstrated that the adsorbents have favorable morphological and mechanical properties. The long-term goal of this study is to recover soluble proteins from industrial wastewaters to produce animal feed or other protein-based products.
Keywords: adsorption, albumin, clay, hemoglobin, spent peppermint leaf
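As a minimal illustration of the isotherm-fitting step described in this abstract, the Langmuir model can be fitted to equilibrium data with a standard least-squares routine; the sketch below uses invented placeholder values, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)
def langmuir(ce, qmax, kl):
    return qmax * kl * ce / (1.0 + kl * ce)

# Placeholder equilibrium data (not from the paper)
ce = np.array([10, 25, 50, 100, 200, 400], dtype=float)   # residual protein conc. (mg/L)
qe = np.array([45, 95, 160, 230, 290, 315], dtype=float)  # adsorbed amount (mg/g)

(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=[300.0, 0.01])
print(f"qmax = {qmax:.1f} mg/g, KL = {kl:.4f} L/mg")
```

The fitted qmax corresponds to the maximum adsorption capacities (e.g., 318 and 344 mg BSA/g) that the abstract reports.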
Procedia PDF Downloads 103
16345 A Multi-Release Software Reliability Growth Model Incorporating Imperfect Debugging and Change-Point under a Simulated Testing Environment and Software Release Time
Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar
Abstract:
Testing during software development is a crucial step, as it makes the software more efficient and dependable. To estimate software reliability through the mean value function, many software reliability growth models (SRGMs) have been developed under the assumption that the operating and testing environments are the same. In practice, this assumption rarely holds: when the software works in its natural field environment, its reliability differs. This article discusses an SRGM comprising change-point and imperfect debugging in a simulated testing environment, which we then extend in a multi-release direction. Software is initially released to the market with a few features; according to market demand, the software company upgrades the current version by adding new features as time passes. We have therefore proposed a generalized multi-release SRGM in which the change-point and imperfect debugging concepts are addressed in a simulated testing environment. The failure-increasing-rate concept has been adopted to determine the change point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets. The results demonstrate that the proposed model fits the datasets better than existing models. We also discuss the optimal release time of the software through a cost model, assuming that the testing and debugging costs are time-dependent.
Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors
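For readers unfamiliar with the non-homogeneous Poisson process (NHPP) framework that SRGMs build on, a standard mean value function and a generic change-point extension can be written as follows; this is textbook notation rather than the paper's specific model, which additionally incorporates imperfect debugging and environmental factors:

\[
m(t) = a\left(1 - e^{-bt}\right), \qquad
m(t) =
\begin{cases}
a\left(1 - e^{-b_1 t}\right), & t \le \tau,\\
a\left(1 - e^{-b_1 \tau - b_2 (t - \tau)}\right), & t > \tau,
\end{cases}
\]

where \(a\) is the expected total number of faults, \(b_1\) and \(b_2\) are the fault-detection rates before and after the change point \(\tau\), and the failure intensity is \(\lambda(t) = m'(t)\).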
Procedia PDF Downloads 74
16344 Family Values and Honest Attitudes in Pakistan: The Role of Tolerance and Justice Attitudes
Authors: Muhammad Shoaib
Abstract:
The aim of the study is to examine the effects of family values on honest attitudes through the mediation of tolerance attitudes and justice attitudes among family members. Like many other developing settings, Pakistani society is undergoing rapid and multifaceted social change, in which traditional thinking coexists and often clashes with modern thinking. Family values have great effects on honest attitudes among family members as well as among all members of Pakistani society. Tolerance attitudes, justice attitudes, personal experiences, and modernity factors contribute to the development of honest attitudes among family members. Family values attitudes reinforce feelings of honesty and fairness and reduce inclinations towards theft. For the present study, 520 respondents were sampled from two urban areas of Punjab province, Lahore and Faisalabad, through a proportionate random sampling technique. A survey method was used as the technique of data collection, and an interview schedule was administered to collect information from the respondents. The results show similar positive effects of tolerance and justice attitudes on honest attitudes through the mediation of family values attitudes.
Keywords: family values, tolerance, justice, honesty, attitudes, Pakistan
Procedia PDF Downloads 446
16343 Catalytic Soot Gasification in Single and Mixed Atmospheres of CO2 and H2O in the Presence of CO and H2
Authors: Yeidy Sorani Montenegro Camacho, Samir Bensaid, Nunzio Russo, Debora Fino
Abstract:
LiFeO2 nano-powders were prepared via the solution combustion synthesis (SCS) method and were used as a carbon gasification catalyst in a reducing atmosphere. The gasification of soot with CO2 and H2O in the presence of CO and H2 (syngas atmosphere) was investigated under atmospheric conditions using a fixed-bed micro-reactor placed in an electric, PID-regulated oven. The catalytic bed was composed of 150 mg of inert silica, 45 mg of carbon (Printex-U) and 5 mg of catalyst. The bed was prepared by ball milling the mixture at 240 rpm for 15 min to obtain intimate contact between the catalyst and the soot. A Gas Hourly Space Velocity (GHSV) of 38,000 h-1 was used for the test campaign. The furnace was heated up to the desired temperature, a flow of 120 mL/min was sent into the system, and at the same time the concentrations of CO, CO2 and H2 were recorded at the reactor outlet using an EMERSON X-STREAM XEGP analyzer. Catalytic and non-catalytic soot gasification reactions were studied in a temperature range of 120-850°C with a heating rate of 5°C/min (non-isothermal case) and at 650°C for 40 minutes (isothermal case). Experimental results show that the gasification of soot with H2O and with CO2 is inhibited by H2 and CO, respectively. The soot conversion at 650°C decreased from 70.2% to 31.6% when CO was present in the feed. Likewise, the soot conversion was 73.1% and 48.6% for the H2O-soot and H2O-H2-soot gasification reactions, respectively. It was also observed that in mixed atmospheres, i.e., when simultaneous carbon gasification with CO2 and steam takes place with H2 and CO as co-reagents, the gasification reaction is strongly inhibited by CO and H2, as was observed in single atmospheres for both the isothermal and non-isothermal reactions. Further, when CO2 and H2O react with carbon at the same time, there is a passive cooperation of steam and carbon dioxide in the gasification reaction: the two gases operate on separate active sites without influencing each other. Finally, despite the strongly reducing operating conditions, it was demonstrated that 32.9% of the initial carbon was gasified using the LiFeO2 catalyst, while in the non-catalytic case only 8% of the soot was gasified at 650°C.
Keywords: soot gasification, nanostructured catalyst, reducing environment, syngas
Procedia PDF Downloads 261
16342 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development
Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg
Abstract:
Due to the volatility of markets and the reduction of product lifecycles, manufacturing companies from high-wage countries are nowadays faced with the challenge of placing more innovative products on the market within ever shorter development times. At the same time, volatile customer requirements have to be satisfied in order to differentiate successfully from market competitors. One potential approach to address these challenges is provided by agile values and principles. These agile values and principles have already proven their success within software development projects in the form of management frameworks like Scrum or concrete procedure models such as Extreme Programming or Crystal Clear. Those models lead to significant improvements regarding quality, costs and development time and are therefore used within most software development projects. Motivated by this success within the software industry, manufacturing companies have ever since tried to transfer agile mechanisms of action to the development of hardware products. Though first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has yet been developed for hardware development, due to the differing constraints of the domains. For this reason, this paper focuses on the design of agile product development processes by transferring mechanisms of action used in agile software development towards product development. This is conducted by decomposing the individual systems 'product development' and 'agile software development' into relevant elements and then symbiotically composing the elements of both systems with respect to the design of agile product development processes. In a first step, existing product development processes are described following existing approaches of system theory. By analyzing existing case studies from industrial companies as well as academic approaches, characteristic objectives, activities and artefacts are identified within a target-, action- and object-system. In partial model two, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms of action are classified into a superior strategy level, a system level comprising characteristic, domain-independent activities and their cause-effect relationships, and an activity-based element level. Within partial model three, the influence of the identified agile mechanisms of action on the characteristic system elements of product development processes is analyzed. For this purpose, the target-, action- and object-system of product development are compared with the strategy, system and element levels of the agile mechanisms of action using graph theory. Furthermore, the necessity of activities within an iteration can be determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in the form of different types of iterations in a final step. By defining iteration-differentiating characteristics and their interdependencies, a logic for the configuration of activities, their form of execution and the relevant artefacts for a specific iteration is developed. Furthermore, characteristic types of iteration for agile product development are identified.
Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom
Procedia PDF Downloads 207
16341 Non-Linear Assessment of Chromatographic Lipophilicity and Model Ranking of Newly Synthesized Steroid Derivatives
Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Anamarija Mandic, Katarina Penov Gasi, Marija Sakac, Aleksandar Okljesa, Andrea Nikolic
Abstract:
The present paper deals with chromatographic lipophilicity prediction of newly synthesized steroid derivatives. The prediction was achieved using in silico generated molecular descriptors and quantitative structure-retention relationship (QSRR) methodology with the artificial neural network (ANN) approach. Chromatographic lipophilicity of the investigated compounds was expressed as the retention factor value logk. For QSRR modeling, a feedforward back-propagation ANN with a gradient descent learning algorithm was applied. The generated ANN models were then ranked using the novel sum of ranking differences (SRD) method. The aim was to identify the most consistent QSRR model and to reveal similarities or dissimilarities between the models. In this study, SRD was performed with the average retention factor values logk as reference values. An excellent correlation between the experimentally observed retention factor values logk and the values predicted by the ANN was obtained, with a correlation coefficient higher than 0.9890. The statistical results show that the established ANN models can be applied for the required purpose. This article is based upon work from COST Action (TD1305), supported by COST (European Cooperation in Science and Technology).
Keywords: artificial neural networks, liquid chromatography, molecular descriptors, steroids, sum of ranking differences
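A minimal sketch of a feedforward back-propagation network for QSRR-style regression of logk on molecular descriptors is shown below, using scikit-learn rather than the authors' software; the descriptor matrix and coefficients are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic descriptor matrix (rows: compounds, columns: in silico descriptors)
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
logk = X @ np.array([0.8, -0.3, 0.2, 0.1, 0.05]) + rng.normal(0, 0.05, 30)

# Feedforward net trained by gradient descent (SGD), as named in the abstract
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), solver="sgd",
                 learning_rate_init=0.01, max_iter=5000, random_state=0),
)
model.fit(X, logk)
print("training R^2:", model.score(X, logk))
```

In practice the correlation coefficient would be evaluated on held-out compounds, and the competing ANN architectures would then be compared with the SRD ranking described above.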
Procedia PDF Downloads 319
16340 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IM) given source characteristics, source-to-site distance, and local site condition for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques, such as Artificial Neural Networks, Random Forests, and Support Vector Machines, as the statistical method in ground motion prediction. The results indicate that these algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is a better tool when only limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
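The comparison between the conventional linear approach and the alternative algorithms can be sketched as follows; the records are synthetic, and the functional form used to generate them is an invented stand-in for a real ground-motion dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic records: magnitude M, log source-to-site distance lnR, site Vs30
rng = np.random.default_rng(1)
n = 500
M = rng.uniform(4, 8, n)
lnR = np.log(rng.uniform(5, 200, n))
vs30 = rng.uniform(200, 800, n)
ln_pga = 1.2 * M - 1.5 * lnR - 0.002 * vs30 + rng.normal(0, 0.5, n)  # invented relation
X = np.column_stack([M, lnR, vs30])

for name, mdl in [("linear regression", LinearRegression()),
                  ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    r2 = cross_val_score(mdl, X, ln_pga, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```

With abundant data the tree ensemble can capture nonlinearity the linear model misses, mirroring the paper's finding that Random Forest outperforms the conventional method when sufficient data are available.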
Procedia PDF Downloads 106
16339 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. Note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as an iterative method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, Fisher scoring method, non-linear regression models, composite distributions
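The iteration the abstract refers to is the standard Fisher scoring update, written here in generic notation (not reproduced from the paper):

\[
\beta^{(k+1)} = \beta^{(k)} + I\left(\beta^{(k)}\right)^{-1} U\left(\beta^{(k)}\right),
\]

where \(U(\beta)\) is the score vector (the gradient of the log-likelihood) and \(I(\beta)\) is the Fisher information matrix; iterating to convergence yields the MLE of the regression parameters.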
Procedia PDF Downloads 34
16338 Equilibrium and Kinetic Studies of Lead Adsorption on Activated Carbon Derived from Mangrove Propagule Waste by Phosphoric Acid Activation
Authors: Widi Astuti, Rizki Agus Hermawan, Hariono Mukti, Nurul Retno Sugiyono
Abstract:
The removal of lead ions (Pb2+) from aqueous solution by activated carbon prepared by phosphoric acid activation of mangrove propagule precursor was investigated in a batch adsorption system. Batch studies were carried out to address various experimental parameters, including pH and contact time. The Langmuir and Freundlich models were able to describe the adsorption equilibrium, while the pseudo-first-order and pseudo-second-order models were used to describe the kinetics of Pb2+ adsorption. The results show that the adsorption data are best described by the Langmuir isotherm model and the pseudo-second-order kinetic model.
Keywords: activated carbon, adsorption, equilibrium, kinetic, lead, mangrove propagule
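For reference, the two kinetic models named in this abstract are conventionally written in their linearized forms as

\[
\ln(q_e - q_t) = \ln q_e - k_1 t \quad \text{(pseudo-first order)}, \qquad
\frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e} \quad \text{(pseudo-second order)},
\]

where \(q_e\) and \(q_t\) are the amounts adsorbed at equilibrium and at time \(t\), and \(k_1\), \(k_2\) are the respective rate constants; a good linear fit of \(t/q_t\) against \(t\) is what supports the pseudo-second-order conclusion reported here.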
Procedia PDF Downloads 167
16337 Comparative Analysis of the Treatment of Okra Seed and Soybean Oil with Crude Enzyme Extract from Malted Rice
Authors: Eduzor Esther, Uhiara Ngozi, Ya’u Abubakar Umar, Anayo Jacob Gabriel, Umar Ahmed
Abstract:
The study investigated the characteristic effects of treating okra seed and soybean seed oil with crude enzyme extract from malted rice. The oils from okra seeds and soybeans were obtained by solvent extraction using n-hexane. Soybean seeds had a higher percentage oil yield than okra seeds. 250 mL of each oil was thoroughly mixed with 5 mL of the malted rice extract at 40°C for 5 min and then filtered; this was regarded as the treated oil, while another 250 mL batch of each oil that was not mixed with the malted rice extract was regarded as the untreated oil. All the oils were analyzed for specific gravity, refractive index, emulsification capacity, absorptivity, total soluble solids (TSS), and viscosity. The treated okra seed and soybean oils gave higher values for specific gravity than the respective untreated oils. The emulsification capacity values were also higher for the treated oils than for the untreated okra seed and soybean oils. The treated oils likewise had higher absorptivity values than the untreated oils, and the TSS values of the treated oils were higher than those of the untreated oils. The viscosity results showed that the treated oils had higher values than the untreated oils for both okra seed and soybean oil. However, the refractive index results showed that the untreated oils had higher values than the treated oils for both okra seed and soybeans. Overall, the treated oils showed better quality with respect to the parameters analyzed, except for the refractive index, which was slightly lower but still within the standard range; the oils are high in unsaturation, especially okra oil compared with soybean oil. It is recommended that treated okra seed and soybean oils can serve better than many oils presently in use, such as groundnut oil, palm oil, and cottonseed oil.
Keywords: extract, malted, oil, okra, rice, seed, soybeans
Procedia PDF Downloads 443
16336 The Effects of Transformational Leadership on Process Innovation through Knowledge Sharing
Authors: Sawsan J. Al-Husseini, Talib A. Dosa
Abstract:
Transformational leadership has been identified as the most important factor affecting innovation and knowledge sharing; it leads to increased goal-directed behavior exhibited by followers and thus to enhanced performance and innovation for the organization. However, there is a lack of models linking transformational leadership, knowledge sharing, and process innovation within higher education (HE) institutions in developing countries generally, and in Iraq in particular. This research aims to examine the mediating role of knowledge sharing in the relationship between transformational leadership and process innovation. A quantitative approach was taken, and 254 usable questionnaires were collected from public HE institutions in Iraq. Structural equation modelling with AMOS 22 was used to analyze the causal relationships among the factors. The research found that knowledge sharing plays a pivotal role in the relationship between transformational leadership and process innovation, and that transformational leadership would be ideal in an educational context, promoting knowledge sharing activities and influencing process innovation in public HE in Iraq. The research has developed some guidelines for researchers as well as leaders, and provides evidence to support the use of transformational leadership to increase process innovation within the HE environment in developing countries, particularly in Iraq.
Keywords: transformational leadership, knowledge sharing, process innovation, structural equation modelling, developing countries
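The mediation test in this study was run as a structural equation model in AMOS; a rough regression-based analogue (not the authors' analysis) of testing knowledge sharing as a mediator could look like the sketch below, with hypothetical variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data; TL, KS, PI are assumed scale scores for
# transformational leadership, knowledge sharing, and process innovation
df = pd.read_csv("survey.csv")

path_a = smf.ols("KS ~ TL", data=df).fit()        # path a: TL -> mediator
path_bc = smf.ols("PI ~ KS + TL", data=df).fit()  # paths b and c': mediator and direct effect

indirect = path_a.params["TL"] * path_bc.params["KS"]  # product-of-coefficients estimate
print("indirect effect (a*b):", indirect)
print("direct effect (c'):", path_bc.params["TL"])
```

A full analysis would bootstrap the indirect effect for inference; SEM additionally models the constructs as latent variables with measurement error, which this sketch omits.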
Procedia PDF Downloads 336
16335 Implementation of Lean Production in Business Enterprises: A Literature-Based Content Analysis of Implementation Procedures
Authors: P. Pötters, A. Marquet, B. Leyendecker
Abstract:
The objective of this paper is to investigate different approaches for the implementation of Lean Production in companies and to provide a structured overview of those approaches. The present work is therefore intended to answer the following research question: What differences and similarities exist between the various systematic approaches and phase models for the implementation of Lean Production? To survey the various implementation approaches discussed in the literature, a qualitative content analysis was conducted: within the framework of a qualitative survey, a selection of texts dealing with Lean Production and its introduction was examined. The analysis presents the different implementation approaches found in the literature, covering the descriptive aspect of the study. The study also provides insights into similarities and differences among the implementation approaches, which are drawn from the analysis of latent text content and author interpretations. The focus is on identifying differences and similarities with respect to the main object of consideration, the objectives pursued, the starting point, the procedure, and the endpoint of each implementation approach. The study defines the concept of Lean Production and presents the various approaches described in the literature that companies can use to implement Lean Production successfully. It distinguishes between five systematic implementation approaches and seven phase models to help companies choose the most suitable approach for their implementation project. The findings of this study can enhance transparency regarding the existing approaches for implementing Lean Production, enabling companies to compare and contrast the available implementation approaches and choose the most suitable one for their specific project.
Keywords: implementation, lean production, phase models, systematic approaches
Procedia PDF Downloads 104
16334 The Effects of Partial Replacement with Sewage Sludge, Calcined Clay, and Waste Marble Powder on Cement Paste Properties
Authors: Abdul Rahim Al Umairi, Hamed Al Kindi
Abstract:
The cement production process contributes significantly to greenhouse gas emissions, accounting for 25% of total industrial emissions. This study systematically examined new, underutilized materials, namely sewage sludge ash (SSA), marble waste (MW), and calcined clay (CC), to evaluate their effects when partially replacing white Portland cement (WPC) in cement paste formulations. Various replacement proportions (10%, 20%, and 30%) were tested, along with different treatment temperatures (600°C, 630°C, 730°C, and 850°C) for SSA and CC. To gain a deeper understanding of the resulting materials, XRF, XRD, and SEM analyses were conducted. The highest compressive strength recorded for the 28-day cured cement paste was 91 MPa when 20% SSA (treated at 600°C) was used, compared with just 53 MPa for the control sample. Conversely, CC exhibited minimal enhancement in compressive strength, while MW had detrimental effects. Additionally, replacing WPC with SSA and CC at 9% and 21% resulted in slight improvements in compressive strength. This research highlights the potential of utilizing underexploited materials like SSA to improve the mechanical and chemical properties of cement paste and indicates that further investigation is needed to enhance environmental sustainability.
Keywords: sewage sludge ash, calcined clay, marble waste, cement
Procedia PDF Downloads 20
16333 The Effects of Self-Efficacy on Challenge and Threat States
Authors: Nadine Sammy, Mark Wilson, Samuel Vine
Abstract:
The Theory of Challenge and Threat States in Athletes (TCTSA) states that self-efficacy is an antecedent of challenge and threat. These states result from conscious and unconscious evaluations of situational demands and personal resources and are represented by both cognitive and physiological markers. Challenge is considered a more adaptive stress response as it is associated with a more efficient cardiovascular profile, as well as better performance and attention effects compared with threat. Self-efficacy is proposed to influence challenge/threat because an individual’s belief that they have the skills necessary to execute the courses of action required to succeed contributes to a perception that they can cope with the demands of the situation. This study experimentally examined the effects of self-efficacy on cardiovascular responses (challenge and threat), demand and resource evaluations, performance and attention under pressurised conditions. Forty-five university students were randomly assigned to either a control (n=15), low self-efficacy (n=15) or high self-efficacy (n=15) group and completed baseline and pressurised golf putting tasks. Self-efficacy was manipulated using false feedback adapted from previous studies. Measures of self-efficacy, cardiovascular reactivity, demand and resource evaluations, task performance and attention were recorded. The high self-efficacy group displayed more favourable cardiovascular reactivity, indicative of a challenge state, compared with the low self-efficacy group. The former group also reported high resource evaluations, but no task performance or attention effects were detected. These findings demonstrate that levels of self-efficacy influence cardiovascular reactivity and perceptions of resources under pressurised conditions.
Keywords: cardiovascular, challenge, performance, threat
Procedia PDF Downloads 232
16332 Seismic Microzonation of El-Fayoum New City, Egypt
Authors: Suzan Salem, Heba Moustafa, Abd El-Aziz Abd El-Aal
Abstract:
Seismic microhazard zonation for urban areas is the first step towards a seismic risk analysis and mitigation strategy. Essential here is to obtain a proper understanding of the local subsurface conditions and to evaluate ground-shaking effects. In the present study, an attempt has been made to evaluate the seismic hazard, considering local site effects, by carrying out detailed geotechnical and geophysical site characterization in El-Fayoum New City. The seismic hazard analysis and microzonation of El-Fayoum New City are addressed in three parts: in the first part, the seismic hazard is estimated using seismotectonic and geological information. The second part deals with site characterization using geotechnical and shallow geophysical techniques. In the last part, local site effects are assessed by carrying out one-dimensional (1-D) ground response analyses using the equivalent linear method with the program SHAKE 2000. Finally, microzonation maps have been prepared. The detailed methodology, along with experimental details, collected data, results, and maps, is presented in this paper.
Keywords: El-Fayoum, microzonation, seismotectonic, Egypt
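As background for the equivalent linear ground response step, the textbook amplification function for the simplest configuration, a single uniform damped soil layer over rigid rock, is approximately

\[
|F(\omega)| \approx \frac{1}{\sqrt{\cos^{2}\left(\omega H / V_s\right) + \left[\xi\, \omega H / V_s\right]^{2}}},
\]

where \(H\) is the layer thickness, \(V_s\) the shear-wave velocity, and \(\xi\) the damping ratio; SHAKE-type analyses generalize this to layered profiles and iterate the strain-compatible stiffness and damping. This closed form is a standard reference result, not an output of the study.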
Procedia PDF Downloads 381
16331 Building Energy Modeling for Networks of Data Centers
Authors: Eric Kumar, Erica Cochran, Zhiang Zhang, Wei Liang, Ronak Mody
Abstract:
The objective of this article was to create a modelling framework that exposes the marginal costs of shifting workloads across geographically distributed data centers. Geographical distribution of internet services helps to optimize their performance for localized end users, with lower communication times and increased availability. However, due to geographical and temporal effects, the physical embodiments of a service's data center infrastructure can vary greatly. In this work, we first identify that the sources of variance in the physical infrastructure primarily stem from local weather conditions, specific user traffic profiles, energy sources, and the types of IT hardware available at the time of deployment. Second, we create a traffic simulator that indicates the IT load at each data center in the set as an approximation of the user traffic profiles. Third, we implement a framework that quantifies the global energy demands using building energy models and the traffic profiles. The model produces a time series of energy demands that can be used for further life cycle analysis of internet services.
Keywords: data-centers, energy, life cycle, network simulation
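A toy illustration of the framework's core idea, scaling an IT-load time series by a weather-dependent efficiency factor to obtain facility energy demand, is sketched below; every profile and coefficient is invented for illustration.

```python
import numpy as np

hours = np.arange(24)
# Diurnal IT load as a proxy for a user traffic profile (kW, invented)
it_load_kw = 800 + 300 * np.sin((hours - 14) / 24 * 2 * np.pi)
# Local outdoor temperature (deg C, invented)
outdoor_c = 20 + 8 * np.sin((hours - 15) / 24 * 2 * np.pi)
# Power usage effectiveness rises with cooling demand above a setpoint
pue = 1.2 + 0.01 * np.clip(outdoor_c - 18, 0, None)
facility_kwh = it_load_kw * pue  # hourly facility energy demand
print(facility_kwh.round(1))
```

Repeating such a calculation per data center, with site-specific weather, traffic, and hardware parameters, yields the kind of energy demand time series the article feeds into life cycle analysis.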
Procedia PDF Downloads 147
16330 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges
Authors: Dianelys Vega, Carlos Magluta, Ney Roitman
Abstract:
The simulation of loads induced by walking people in civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important issues in the design of slender structures is Human-Structure Interaction (HSI). How moving people interact with structures, and the effect this has on their dynamic response, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still some gaps in knowledge, and more reliable models need to be investigated. On this topic, several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. This work therefore contributes to a better understanding of the phenomenon by providing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a bi-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model that was applied to a prototype footbridge. The numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst), at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different walking speeds over instrumented force platforms to measure the walking force, while an accelerometer placed at the waist of each subject measured the acceleration of the center of mass at the same time. By fitting the step force and the center of mass acceleration through successive numerical simulations, the model parameters were estimated. In addition, experimental data from a walking pedestrian on a flexible structure were used to validate the interaction model presented, through comparison of the measured and simulated structural responses at mid span. It was found that the pedestrian model was able to adequately reproduce the ground reaction force and the center of mass acceleration for normal and slow walking speeds, being less efficient for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Besides, the interaction model was also capable of estimating the structural response with good approximation, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, neglecting intra-subject variabilities. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it efficiently reproduced the center of mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validation is required, the interaction model also seems to be a useful framework to estimate the dynamic response of structures under loads induced by walking pedestrians.
Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction
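The parameter-fitting loop described here, adjusting model parameters until simulated forces match force-platform measurements, can be caricatured with a one-degree-of-freedom mass-spring-damper in place of the paper's bi-dimensional bipedal model; this is a deliberately simplified sketch, and all numerical values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

m, g = 75.0, 9.81  # body mass (kg) and gravity; values assumed

def simulate_grf(k, c, t):
    """Vertical ground reaction force of a 1-DOF spring-damper leg over stance."""
    def rhs(_, s):
        y, v = s
        return [v, (-k * y - c * v) / m - g]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, -0.3], t_eval=t)
    return -k * sol.y[0] - c * sol.y[1]

def loss(params, t, grf_measured):
    k, c = params
    return np.mean((simulate_grf(k, c, t) - grf_measured) ** 2)

# t and grf_measured would come from the force-platform records, e.g.:
# res = minimize(loss, x0=[20e3, 300.0], args=(t, grf_measured), method="Nelder-Mead")
# k_fit, c_fit = res.x  # estimated leg stiffness and damping
```

The study's actual model has more states and parameters, but the principle is the same: leg stiffness and damping are tuned so that the simulated step force and center-of-mass acceleration track the measurements.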
Procedia PDF Downloads 132
16329 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk and risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of the relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcome prevalences (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7), representing weak, moderate, or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
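One of the candidate estimators, the modified-Poisson GLM with robust (sandwich) standard errors, can be sketched as follows; the data generation mirrors the simulation design in spirit only, and the code is illustrative rather than the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

# Simulate one trial dataset: binary outcome, randomised treatment, one covariate
rng = np.random.default_rng(42)
n = 1000
treat = rng.integers(0, 2, n)
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * treat + 0.7 * x)))
y = rng.binomial(1, p)

# Modified Poisson: Poisson GLM with log link plus robust (HC) covariance
X = sm.add_constant(np.column_stack([treat, x]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("adjusted relative risk:", np.exp(fit.params[1]))
```

Replicating this over thousands of simulated datasets, and swapping in the other estimators, is how the bias, MSE, coverage, and convergence comparisons described above would be assembled.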
Procedia PDF Downloads 114
16328 Research on Residential Block Fabric: A Case Study of Hangzhou West Area
Abstract:
Residential block construction in China's big cities began in the 1950s, and four models have had a far-reaching influence on the modern residential block over the course of its development: the unit compound and the residential district from the 1950s to the 1980s, and the gated community and the open community from the 1990s to the present. Based on an analysis of the fabric of these four models, the article takes residential blocks in the Hangzhou west area as an example and carries out studies at the urban structure level and the block spatial level, mainly covering the urban road network, land use, community function, road organization, public space, and building fabric. Finally, the article puts forward a semi-open sub-community strategy to improve the current fabric.
Keywords: Hangzhou west area, residential block model, residential block fabric, semi-open sub-community strategy
Procedia PDF Downloads 417
16327 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study researches the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancement over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
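A condensed sketch of the report-text arm of such a pipeline, TF-IDF features from radiology reports feeding a Random Forest, is shown below; the file and column names are hypothetical, and the study's actual pipeline also used LLM-based NLP and merged image-derived features.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical columns: 'findings' (report text) and 'label' (clinical outcome)
df = pd.read_csv("indiana_reports.csv")
X = TfidfVectorizer(max_features=2000, stop_words="english").fit_transform(df["findings"])
X_tr, X_te, y_tr, y_te = train_test_split(X, df["label"], test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```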
Procedia PDF Downloads 44
16326 Removal of Xylenol Orange and Eriochrome Black T Dyes from Aqueous Solution Using Chemically Activated Cocos nucifera and Mango Seed
Authors: Padmesh Tirunelveli Narayanapillai, Joel Sharwinkumar, Gaitri Saravanan
Abstract:
The biosorption of Xylenol Orange (XO) and Eriochrome Black T (EBT) from aqueous solutions by chemically activated Cocos nucifera and mango seed as low-cost, natural, and eco-friendly biosorbents was investigated. The biosorption of XO and EBT was optimized over different experimental parameters: initial pH 2–7, temperature 30–60°C, biosorbent dosage 0.1–0.5 g, and XO:EBT dye proportions of 0–100 wt%. Physicochemical characterization was conducted by Fourier Transform Infrared (FTIR) spectroscopy. The equilibrium uptake increased with an increase in the initial dye concentration in the solution. The biosorption kinetic data were well fitted by the pseudo-second-order kinetic model. The experimental isotherm data were analyzed using the Langmuir, Freundlich, Redlich-Peterson, and Toth isotherm equations. The thermodynamic parameters ∆G°, ∆H°, and ∆S° were calculated, indicating that the biosorption of XO and EBT dyes is a spontaneous and endothermic process. The Langmuir model, which assumes monolayer adsorption, gave the best fit with the highest correlation coefficient (R2 = 0.9971) for both biosorbents at the optimum conditions of pH 3, temperature 30°C, and a dosage of 0.5 g for chemically activated Cocos nucifera and 0.4 g for chemically activated mango seed. The maximum dye removal efficiency, 79.75%, was obtained with chemically activated mango seed, higher than that of chemically activated Cocos nucifera. In summary, this work showed that chemically modified activated mango seed can be used effectively as a promising low-cost biosorbent for the removal of different XO and EBT mixed dye combinations from aqueous solutions.
Keywords: mixed dye proportions, xylenol orange and eriochrome black t, chemically activated cocos nucifera and mango seed, kinetic, isotherm and thermodynamic studies, FTIR
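The thermodynamic parameters reported here are conventionally obtained from the temperature dependence of the equilibrium constant via the standard relations (textbook forms, not reproduced from the paper):

\[
\Delta G^{\circ} = -RT \ln K, \qquad
\ln K = \frac{\Delta S^{\circ}}{R} - \frac{\Delta H^{\circ}}{RT},
\]

so a van 't Hoff plot of \(\ln K\) against \(1/T\) yields \(\Delta H^{\circ}\) from the slope and \(\Delta S^{\circ}\) from the intercept; a positive \(\Delta H^{\circ}\) together with a negative \(\Delta G^{\circ}\) corresponds to the endothermic, spontaneous process the abstract describes.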
Procedia PDF Downloads 71
16325 Debriefing Practices and Models: An Integrative Review
Authors: Judson P. LaGrone
Abstract:
Simulation-based education was once a luxury component of nursing curricula but now serves as a vital element of an individual's learning experience. A debriefing occurs after the simulation scenario or clinical experience is completed, allowing the instructor(s) or trained professional(s) to act as a debriefer and guide a reflection with the purpose of acknowledging, assessing, and synthesizing the thought processes, decision-making processes, and actions/behaviors performed during the scenario or clinical experience. Debriefing is a vital component of the simulation process and the educational experience, allowing the learner(s) to progressively build upon past experiences and current scenarios within a safe and welcoming environment, with a guided dialog to enhance future practice. The aim of this integrative review was to assess current practices and debriefing models in simulation-based education for health care professionals and students. The following databases were utilized for the search: CINAHL Plus, Cochrane Database of Systematic Reviews, EBSCO (ERIC), PsycINFO (Ovid), and Google Scholar. The advanced search option was used to narrow down the search for articles (full text, Boolean operators, English language, peer-reviewed, published in the past five years). Key terms included debrief, debriefing, debriefing model, debriefing intervention, psychological debriefing, simulation, simulation-based education, simulation pedagogy, health care professional, nursing student, and learning process. Included studies focus on debriefing after clinical scenarios with nursing students, medical students, and interprofessional teams conducted between 2015 and 2020. Common themes were identified after the analysis of articles matching the search criteria. Several debriefing models are addressed in the literature, with similar effectiveness for participants in clinical simulation-based pedagogy. Themes identified included (a) the importance of debriefing in simulation-based pedagogy, (b) the environment in which debriefing takes place, (c) the individuals who should conduct the debrief, (d) the length of the debrief, and (e) the methodology of the debrief. Debriefing models supported by theoretical frameworks and facilitated by trained staff are vital for a successful debriefing experience. The models ranged from self-debriefing, facilitator-led debriefing, and video-assisted debriefing to rapid cycle deliberate practice and reflective debriefing. A recurring finding was the emphasis on continued research into systematic tool development and analysis of the validity and effectiveness of current debriefing practices. There is a lack of consistency in debriefing models among nursing curricula, along with an increasing rate of faculty ill-prepared to facilitate the debriefing phase of the simulation.
Keywords: debriefing model, debriefing intervention, health care professional, simulation-based education
Procedia PDF Downloads 142
16324 Rethinking the Use of Online Dispute Resolution in Resolving Cross-Border Small E-Disputes in the EU
Authors: Sajedeh Salehi, Marco Giacalone
Abstract:
This paper examines the role of existing online dispute resolution (ODR) mechanisms and their effects on improving access to justice, a right protected by Art. 47 of the EU Charter of Fundamental Rights, for consumers in the EU. The major focus of this study is on evaluating ODR as a means of resolving Business-to-Consumer (B2C) cross-border small claims arising from e-commerce transactions. The authors elaborate on the consequences of implementing ODR methods in the context of recent developments in EU regulatory safeguards for promoting consumer protection. In this analysis, both non-judiciary and judiciary ODR redress mechanisms are considered; however, significant consideration is given to judiciary ODR methods, both obligatory and non-obligatory. For that purpose, this paper investigates in particular the impact of the EU ODR platform as well as the European Small Claims Procedure (ESCP) Regulation 861/2007 and their role in accelerating access to justice for consumers in B2C e-disputes. Although a considerable volume of research has been carried out on ODR for consumer claims, rather less (or no) attention has been paid to providing a combined doctrinal and empirical evaluation of ODR's potential in resolving cross-border small e-disputes in the EU. Hence, the methodological approach taken in this study is a mixed methodology combining qualitative (interviews) and quantitative (surveys) research methods, drawing mainly on data acquired through the findings of the Small Claims Analysis Net (SCAN) project. This project contributes towards examining the implementation and efficiency of the ESCP Regulation in providing consumers with a legal watershed through the use of ODR for their transnational small claims. The outcomes of this research may benefit both academia and policymakers at the national and international level.
Keywords: access to justice, consumers, e-commerce, small e-disputes
Procedia PDF Downloads 128
16323 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting
Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero
Abstract:
In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in artistic foundry processes with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost-wax technique of Ceramic Shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt out the PLA, leaving an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce the cost and time compared with hand modeling in wax. In addition, 3D printing can produce parts that are not possible to create with manual techniques. However, sculptures created with this technique have a size limit: when pieces printed with PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer makes it possible to obtain smaller and more detailed pieces than FDM. Such small models are quite difficult and complex to melt out using the lost-wax technique of Ceramic Shell casting. As alternatives, there are microcasting and electroforming, which specialize in creating small metal pieces such as jewelry. Microcasting is a variant of lost-wax casting that consists of introducing the model into a cylinder into which the refractory material is also poured. The molds are heated in an oven to melt out the model and fire them. Finally, the metal is poured into the still-hot cylinders, which rotate in a machine at high speed to distribute the metal properly. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended sizes for using 3D printers, both with PLA and with resin, and first tests are being carried out to validate the electroforming process for micro-sculptures printed in resin using a DLP printer.
Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling
Procedia PDF Downloads 135
16322 Machine Learning Approaches to Water Usage Prediction in Kocaeli: A Comparative Study
Authors: Kasim Görenekli, Ali Gülbağ
Abstract:
This study presents a comprehensive analysis of water consumption patterns in Kocaeli province, Turkey, utilizing various machine learning approaches. We analyzed data from 5,000 water subscribers across residential, commercial, and official categories over an 80-month period from January 2016 to August 2022, resulting in a total of 400,000 records. The dataset encompasses water consumption records, weather information, weekends and holidays, previous months' consumption, and the influence of the COVID-19 pandemic. We implemented and compared several machine learning models, including Linear Regression, Random Forest, Support Vector Regression (SVR), XGBoost, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). Particle Swarm Optimization (PSO) was applied to optimize the hyperparameters of all models. Our results demonstrate varying performance across subscriber types and models. For official subscribers, Random Forest achieved the highest R² of 0.699 with PSO optimization. For commercial subscribers, Linear Regression performed best with an R² of 0.730 with PSO. Residential water usage proved more challenging to predict, with XGBoost achieving the highest R² of 0.572 with PSO. The study identified key factors influencing water consumption, with previous months' consumption, meter diameter, and weather conditions being among the most significant predictors. The impact of the COVID-19 pandemic on consumption patterns was also observed, particularly in residential usage. This research provides valuable insights for effective water resource management in Kocaeli and similar regions, considering Turkey's high water loss rate and below-average per capita water supply. The comparative analysis of different machine learning approaches offers a comprehensive framework for selecting appropriate models for water consumption prediction in urban settings.
Keywords: machine learning, water consumption prediction, particle swarm optimization, COVID-19, water resource management
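A minimal sketch of PSO-style hyperparameter search for one of the models (Random Forest) is given below; it is hand-rolled for illustration rather than the authors' tooling, and the search bounds are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    n_est, depth = int(params[0]), int(params[1])
    model = RandomForestRegressor(n_estimators=n_est, max_depth=depth, random_state=0)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

def pso(X, y, n_particles=10, iters=20, bounds=((50, 500), (2, 20))):
    rng = np.random.default_rng(0)
    lo = np.array([b[0] for b in bounds], float)
    hi = np.array([b[1] for b in bounds], float)
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 2))
        # Inertia 0.7, cognitive and social weights 1.5 (common defaults)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p, X, y) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()

# usage: best_params, best_r2 = pso(X, y)  # X, y: features and monthly consumption
```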
Procedia PDF Downloads 15
16321 A Review on Application of Waste Tire in Concrete
Authors: M. A. Yazdi, J. Yang, L. Yihui, H. Su
Abstract:
The application of recycled waste tires in civil engineering practices, namely asphalt paving mixtures and cement-based materials, has been gaining ground across the world. This review summarizes and compares recent achievements in the area of plain rubberized concrete (PRC) in detail. Different treatment methods are discussed for improving the performance of rubberized Portland cement concrete. The review also covers the effects of the size and amount of tire rubber on the mechanical and durability properties of PRC, and the microstructural behaviour of rubberized concrete is detailed.
Keywords: waste rubber aggregates, microstructure, treatment methods, size and content effects
Procedia PDF Downloads 332
16320 Employment Mobility and the Effects of Wage Level and Tenure
Authors: Idit Kalisher, Israel Luski
Abstract:
One result of the growing dynamicity of labor markets in recent decades is a wider scope of employment mobility, i.e., transitions between employers, either within or between careers. Employment mobility decisions are primarily affected by the worker's current employment status, which is reflected in wage and tenure. Using 34,328 observations from the National Longitudinal Survey of Youth 1979 (NLS79), derived from the USA population between 1990 and 2012, this paper investigates the effects of wage and tenure on employment mobility choices, and additionally examines the effects of other personal characteristics, individual labor market characteristics, and macroeconomic factors. The estimation strategy was designed to address two challenges that arise from the combination of the model and the data: (a) endogeneity of the wage and the tenure in the choice equation; and (b) unobserved heterogeneity, as the data of this research are longitudinal. To address (a), estimation was performed using a two-stage limited dependent variable procedure (2SLDV); and to address (b), the second stage was estimated using femlogit, an implementation of the multinomial logit model with fixed effects. Among workers who have experienced at least one turnover, the wage was found to have a main effect on the career turnover likelihood of all workers, whereas the wage effect on the job turnover likelihood was found to depend on individual characteristics. The wage was found to negatively affect the turnover likelihood, and the effect was found to vary across wage levels: high-wage workers were more affected than low-wage workers. Tenure was found to have a main positive effect on both turnover types' likelihoods, though the effect was moderated by the wage. The findings also reveal that as their wage increases, women are more likely to turn over than men, and academically educated workers are more likely to turn over within careers. Minorities were found to be as likely as Caucasians to turn over after a wage increase, but less likely to turn over with each additional tenure year. The wage and tenure effects were also found to vary between careers. Differences in attitudes towards money, labor market opportunities, and risk aversion could explain these findings. Additionally, the likelihood of a turnover was found to be affected by previous unemployment spells, age, and other labor market and personal characteristics. The results of this research could assist policymakers as well as business owners and employers. The former may be able to encourage the employment of women and older workers by considering the effects of gender and age on the probability of a turnover, and the latter may be able to assess their employees' likelihood of a turnover by considering the effects of their personal characteristics.
Keywords: employment mobility, endogeneity, femlogit, turnover
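The femlogit estimator used in the paper adds individual fixed effects to the multinomial logit; as a rough approximation of the second-stage choice model only, a plain multinomial logit (without the fixed effects that are central to the paper's strategy) could be sketched as below, with hypothetical column names.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel extract: turnover coded 0 = stay, 1 = job turnover,
# 2 = career turnover; column names are assumptions
df = pd.read_csv("nlsy79_extract.csv")
X = sm.add_constant(df[["log_wage", "tenure", "age"]])
fit = sm.MNLogit(df["turnover"], X).fit()
print(fit.summary())
```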
Procedia PDF Downloads 151
16319 Metabolic and Phylogenetic Profiling of Rhizobium leguminosarum Strains Isolated from NZ Soils of Varying pH
Authors: Anish Shah, Steve A. Wakelin, Derrick Moot, Aurélie Laugraud, Hayley J. Ridgway
Abstract:
A mixed ryegrass-clover pasture system is used in New Zealand, where clovers are generally inoculated with commercially available strains of rhizobia. The community of rhizobia living in the soil, and the way in which they interact with the plant, are affected by different biotic and abiotic factors. In general, bacterial richness and diversity in soil vary with soil pH. pH also affects cell physiology and acts as a master variable that controls wider soil physicochemical conditions such as P availability, Al release and micronutrient availability. As such, pH can have both primary and secondary effects on soil biology and processes. The aim of this work was to investigate the effect of soil pH on the genetic diversity and metabolic profile of Rhizobium leguminosarum strains nodulating clover. Soils were collected from 12 farms across New Zealand with a pH(water) range of 4.9 to 7.5, comprising four acidic (pH 4.9 – 5.5), four 'neutral' (5.8 – 6.1) and four alkaline (6.5 – 7.5) soils. Bacteria were recovered from nodules of Trifolium repens (white clover) and T. subterraneum (subterranean clover) grown in the soils. The strains were cultured and screened against a range of pH-amended media to determine whether they were adapted to pH levels similar to those of their native soils. The strains that showed high relative growth at a given pH (~20% of those isolated) were selected for metabolic and taxonomic profiling. The Omnilog (Biolog Inc., Hayward, CA) phenotype array was used to perform carbon (C) utilisation assays on the selected strains. DNA was extracted from the strains with differing C utilisation profiles, and PCR products from both forward and reverse primers were sequenced for the following genes: 16S rRNA, recA, nodC, nodD and nifH (symbiotic).
Keywords: bacterial diversity, clover, metabolic and taxonomic profiling, pH adaptation, rhizobia
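As an illustration of the strain-selection step described above (isolates showing high relative growth at a given pH, ~20% of those recovered), here is a brief pandas sketch. The abstract does not specify the growth metric or data layout, so the OD600 columns, file name, and the per-pH top-20% cutoff are all assumptions.

```python
import pandas as pd

# Hypothetical layout: one row per strain x pH treatment, with an OD600
# reading for the pH-amended medium and for an unamended control.
growth = pd.read_csv("strain_growth.csv")
growth["relative_growth"] = growth["od600_amended"] / growth["od600_control"]

# For each pH level, keep the strains in the top 20% of relative growth,
# mirroring the ~20% of isolates carried forward to profiling.
selected = (
    growth.groupby("ph_level", group_keys=False)
          .apply(lambda g: g[g["relative_growth"]
                             >= g["relative_growth"].quantile(0.80)])
)
print(selected[["strain_id", "ph_level", "relative_growth"]])
```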
Procedia PDF Downloads 259
16318 Effects of Bilateral Electroconvulsive Therapy on Autobiographical Memories in Asian Patients
Authors: Lai Gwen Chan, Yining Ong, Audrey Yoke Poh Wong
Abstract:
Background: The efficacy of electroconvulsive therapy (ECT) as a treatment for a range of mental disorders is well established. However, ECT is often associated with either temporary or persistent cognitive side effects, which limit its wider prescription. Retrograde amnesia is the most commonly reported of these. Most studies have found the deficit in recalling autobiographical memories to be short-term, although a few have reported more persistent amnesic effects. Little is known about ECT-related amnesic effects in Asian populations. Hence, this study aims to resolve conflicting findings and to better elucidate the effects of ECT on cognitive functioning in a local sample. Method: 12 patients underwent bilateral ECT under the care of the Psychological Medicine Department, Tan Tock Seng Hospital, Singapore. Participants' cognition and level of functioning were assessed at four time points: before ECT, between the third and fourth induced seizures, at the end of the whole course of ECT, and two months after the index course of ECT. Results: Global Assessment of Functioning (GAF) scores increased significantly at the completion of ECT. Case-by-case analyses also revealed an overall improvement in personal semantic and autobiographical memory two months after the index course of ECT. A transient dip in both personal semantic and autobiographical memory scores was observed in one participant between the third and fourth induced seizures, but it subsequently resolved, with performance better than at baseline. Conclusions: The findings of this study suggest that ECT is an effective treatment for alleviating the severity of symptoms, and that it does not adversely affect attention, language, executive functioning, or personal semantic and autobiographical memory. The findings also suggest that Asian patients may respond to bilateral ECT differently from Western samples.
Keywords: electroconvulsive therapy (ECT), autobiographical memory, cognitive impairment, psychiatric disorder
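The abstract reports a significant increase in GAF scores but does not name the statistical test used. With 12 paired observations, a Wilcoxon signed-rank test would be a natural nonparametric choice; the sketch below uses invented scores purely for illustration.

```python
from scipy.stats import wilcoxon

# Hypothetical GAF scores for 12 patients at baseline and at completion
# of ECT; these values are illustrative, not the study's data.
gaf_pre  = [35, 40, 38, 42, 30, 45, 37, 41, 33, 39, 36, 44]
gaf_post = [52, 55, 49, 60, 47, 58, 50, 57, 48, 54, 51, 62]

# Paired nonparametric pre/post comparison, suitable for small samples.
stat, p = wilcoxon(gaf_pre, gaf_post)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```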
Procedia PDF Downloads 193
16317 Analytical Model of Locomotion of a Thin-Film Piezoelectric 2D Soft Robot Including Gravity Effects
Authors: Zhiwu Zheng, Prakhar Kumar, Sigurd Wagner, Naveen Verma, James C. Sturm
Abstract:
Soft robots have drawn great interest recently because of the rich range of shapes and motions they can take on to address new applications, compared to traditional rigid robots. Large-area electronics (LAE) provides a unique platform for creating soft robots by leveraging thin-film technology to integrate a large number of actuators, sensors, and control circuits on flexible sheets. However, the rich shapes and motions possible, especially when interacting with complex environments, pose significant challenges to forming the well-generalized and robust models necessary for robot design and control. In this work, we describe an analytical model, based on Euler-Bernoulli beam theory, for predicting the shape and locomotion of a flexible (steel-foil-based) piezoelectric-actuated 2D robot. When unpowered, the robot nominally lies flat on the ground; when powered, its shape is controlled by an array of piezoelectric thin-film actuators. Key features of the model are its ability to incorporate the significant effects of gravity on the shape and to precisely predict the spatial distribution of friction against the contacting surfaces, which is necessary for determining inchworm-type motion. We verified the model by developing a distributed discrete-element representation of a continuous piezoelectric actuator and by comparing the analytical predictions to discrete-element robot simulations using PyBullet. Without gravity, predicting the shape of a sheet with a linear array of piezoelectric actuators at arbitrary voltages is straightforward. However, gravity significantly distorts the shape of the sheet, causing some segments to flatten against the ground. Our work includes the following contributions: (i) A self-consistent approach was developed to determine exactly which parts of the soft robot are lifted off the ground, and the exact shape of those sections, for an arbitrary array of piezoelectric voltages and configurations. (ii) Inchworm-type motion relies on controlling the relative friction with the ground surface in different sections of the robot. By adding torque balance to the model and analyzing shear forces, the model can determine the exact spatial distribution of the vertical force that the ground exerts on the soft robot, and from this, the spatial distribution of friction forces between ground and robot. (iii) By combining this spatial friction distribution with the shape of the soft robot, as a function of time as the piezoelectric actuator voltages are changed, the inchworm-type locomotion of the robot can be determined. As a practical example, we calculated the performance of a 5-actuator system on a 50-µm-thick steel foil, assuming the piezoelectric properties of commercially available thin-film piezoelectric actuators. The model predicted inchworm motion of up to 200 µm per step. For independent verification, we also modelled the system in PyBullet, a discrete-element robot simulator. To model a continuous thin-film piezoelectric actuator, we broke each actuator into multiple segments, each consisting of two rigid arms of appropriate mass connected by a 'motor' whose torque was set by the applied actuator voltage. Excellent agreement between our analytical model and the discrete-element simulator was shown, both for the full deformation shape and for the motion of the robot.
Keywords: analytical modeling, piezoelectric actuators, soft robot locomotion, thin-film technology
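To make the discretized-beam idea concrete, here is a highly simplified Python sketch: each actuator segment imposes a curvature taken as linear in its voltage (Euler-Bernoulli gives curvature κ = M/EI, with the piezoelectric moment M assumed proportional to V), segment angles are integrated along the arc to obtain the sheet's shape, and any segment that would dip below the ground is clamped flat. This crude clamping only gestures at the paper's self-consistent lift-off calculation, and all numerical parameters are hypothetical.

```python
import numpy as np

SEG_LEN = 0.01        # segment length in metres (hypothetical)
KAPPA_PER_VOLT = 2.0  # curvature gain in 1/(m*V) (hypothetical)

def robot_shape(voltages):
    """Integrate per-segment curvature along the arc to get (x, y) nodes."""
    x, y, theta = [0.0], [0.0], 0.0
    for v in voltages:
        theta += KAPPA_PER_VOLT * v * SEG_LEN  # bending angle added by segment
        nx = x[-1] + SEG_LEN * np.cos(theta)
        ny = y[-1] + SEG_LEN * np.sin(theta)
        if ny < 0.0:              # crude stand-in for the self-consistent
            ny, theta = 0.0, 0.0  # gravity/ground-contact treatment
        x.append(nx)
        y.append(ny)
    return np.array(x), np.array(y)

# Example with 5 actuators: an arch whose ends stay grounded while the
# middle lifts off, the kind of shape used for inchworm-type stepping.
xs, ys = robot_shape([8.0, -8.0, 0.0, -8.0, 8.0])
print(np.round(ys, 4))
```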
Procedia PDF Downloads 180