Search results for: Reproduction performance.
249 Controller Design for Euler-Bernoulli Smart Structures Using Robust Decentralized FOS via Reduced Order Modeling
Authors: T.C. Manjunath, B. Bandyopadhyay
Abstract:
This paper features the modeling and design of a Robust Decentralized Fast Output Sampling (RDFOS) feedback control technique for the active vibration control of smart flexible multimodel Euler-Bernoulli cantilever beams for a multivariable (MIMO) case, retaining the first six vibratory modes. The beam structure is modeled in state space form using piezoelectric theory, the Euler-Bernoulli beam theory and the Finite Element Method (FEM) by dividing the beam into four finite elements and placing the piezoelectric sensor/actuator at two finite element locations (positions 2 and 4) as collocated, surface-mounted pairs, thus giving rise to a multivariable model of the smart structure plant with two inputs and two outputs. Five such multivariable models are obtained by varying the dimensions (aspect ratios) of the aluminium beam. Using a model order reduction technique, the reduced order model of the higher order system is obtained based on dominant eigenvalue retention and the Davison technique. RDFOS feedback controllers are designed for the above five multivariable multimodel plants. The closed loop responses with the RDFOS feedback gain and the magnitudes of the control input are obtained, and the performance of the proposed multimodel smart structure system is evaluated for vibration control.
Keywords: Smart structure, Euler-Bernoulli beam theory, fast output sampling feedback control, Finite Element Method, state space model, vibration control, LMI, model order reduction.
248 Semantic Enhanced Social Media Sentiments for Stock Market Prediction
Authors: K. Nirmala Devi, V. Murali Bhaskaran
Abstract:
Traditional document representation for classification follows the Bag of Words (BoW) approach to represent term weights. The conventional method uses the Vector Space Model (VSM) to exploit the statistical information of terms in the documents, but it fails to address the semantic information as well as the order of the terms present in the documents. The phrase-based approach follows the order of the terms present in the documents but still ignores the semantics behind the words. Therefore, a semantic concept based approach is used in this paper to enhance the semantics by incorporating ontology information. In this paper a novel method is proposed to forecast the intraday stock market price directional movement based on the sentiments from Twitter and Moneycontrol news articles. Stock market forecasting is a very difficult and highly complicated task because it is affected by many factors such as economic conditions, political events and investor sentiment. Stock market series are generally dynamic, nonparametric, noisy and chaotic by nature. Sentiment analysis along with the wisdom of crowds can automatically compute the collective intelligence of future performance in many areas such as the stock market, box office sales and election outcomes. The proposed method utilizes collective sentiments for the stock market to predict stock price directional movements. The Granger causality test shows that the collective sentiments in the above social media have strong predictive power for stock price directional movements (up/down).
Keywords: Bag of Words, Collective Sentiments, Ontology, Semantic relations, Sentiments, Social media, Stock Prediction, Twitter, Vector Space Model and wisdom of crowds.
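As a rough illustration of the final step described in this abstract, the sketch below runs a pairwise Granger causality test between a daily sentiment series and next-day price direction. It is a minimal sketch only: the column names, lag order and synthetic data are assumptions for illustration, not the authors' dataset or pipeline.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical daily series: aggregate sentiment score and stock price direction (1 = up, 0 = down).
rng = np.random.default_rng(0)
sentiment = rng.normal(size=300)
direction = (np.roll(sentiment, 1) + 0.5 * rng.normal(size=300) > 0).astype(float)

df = pd.DataFrame({"direction": direction, "sentiment": sentiment})

# Tests whether 'sentiment' Granger-causes 'direction' (second column predicting the first)
# for lags 1..3; low p-values suggest predictive power of collective sentiment.
results = grangercausalitytests(df[["direction", "sentiment"]], maxlag=3)
```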
247 Mechanical Behavior of Recycled Pet Fiber Reinforced Concrete Matrix
Authors: Comingstarful Marthong, Deba Kumar Sarma
Abstract:
Concrete is strong in compression but weak in tension. The tensile strength and ductile properties of concrete can be improved by the addition of short dispersed fibers. Polyethylene terephthalate (PET) fiber, obtained from hand cutting or mechanical slitting of plastic sheets, is generally used as discrete reinforcement in substitution for steel fiber. PET fiber obtained from the former process is in the form of a straight slit sheet pattern, which imparts weaker mechanical bonding behavior in the concrete matrix. To overcome the limitation of straight slit sheet fiber, the present study considered two additional fiber geometries, namely (a) flattened end slit sheet and (b) deformed slit sheet. The mix for plain concrete was designed for a compressive strength of 25 MPa at 28 days of curing with a water-cement ratio of 0.5. Cylindrical and beam specimens with a 0.5% fiber volume fraction and without fibers were cast to investigate the influence of geometry on the mechanical properties of concrete. The performance parameters studied include flexural strength, splitting tensile strength, compressive strength and ultrasonic pulse velocity (UPV). Test results show that the geometry of the fiber has a marginal effect on the workability of concrete. However, it plays a significant role in achieving good compressive and tensile strength of concrete. Further, significant improvements in terms of flexural strength and energy dissipation capacity were observed for the other fibers as compared to the straight slit sheet pattern. Also, the inclusion of PET fiber improved the ability to absorb energy in the post-cracking state of the specimen without producing significant porous structures.
Keywords: Concrete matrix, polyethylene terephthalate (PET) fibers, mechanical bonding, mechanical properties, UPV.
246 Data Privacy and Safety with Large Language Models
Authors: Ashly Joseph, Jithu Paulose
Abstract:
Large language models (LLMs) have revolutionized natural language processing capabilities, enabling applications such as chatbots, dialogue agents, and image and video generators. Nevertheless, their training on extensive datasets comprising personal information poses notable privacy and safety hazards. This study examines methods for addressing these challenges, specifically focusing on approaches to enhance the security of LLM outputs, safeguard user privacy, and adhere to data protection rules. We explore several methods, including post-processing detection algorithms, content filtering, and reinforcement learning from human and AI inputs, and the difficulties in maintaining a balance between model safety and performance. The study also emphasizes the dangers of unintentional data leakage, privacy issues related to user prompts, and the possibility of data breaches. We highlight the significance of corporate data governance rules and optimal methods for engaging with chatbots. In addition, we analyze the development of data protection frameworks, evaluate the adherence of LLMs to the General Data Protection Regulation (GDPR), and examine privacy legislation in academic and business policies. We demonstrate the difficulties and remedies involved in preserving data privacy and security in the age of sophisticated artificial intelligence by employing case studies and real-life instances. This article seeks to educate stakeholders on practical strategies for improving the security and privacy of LLMs, while also assuring their responsible and ethical implementation.
Keywords: Data privacy, large language models, artificial intelligence, machine learning, cybersecurity, general data protection regulation, data safety.
245 The Role of Fluid Catalytic Cracking in Process Optimisation for Petroleum Refineries
Authors: Chinwendu R. Nnabalu, Gioia Falcone, Imma Bortone
Abstract:
Petroleum refining is a chemical process in which the raw material (crude oil) is converted into finished commercial products for end users. The fluid catalytic cracking (FCC) unit is a key asset in refineries, requiring optimised processes in the context of engineering design. Following the first stage of separation of crude oil in a distillation tower, an additional 40 per cent is attainable in the gasoline pool through further conversion of the downgraded product of crude oil (the residue from the distillation tower) using a catalyst in the FCC process. Effective removal of sulphur oxides, nitrogen oxides, carbon and heavy metals from FCC gasoline requires greater separation efficiency and is of enormous environmental significance. The FCC unit is primarily a reactor and regeneration system which employs cyclone systems for separation. Catalyst losses in FCC cyclones lead to high particulate matter emissions on the regenerator side and fines carryover into the product on the reactor side. This paper aims at demonstrating the importance of FCC unit design criteria in terms of technical performance and compliance with environmental legislation. A systematic review of state-of-the-art FCC technology was carried out, identifying its key technical challenges and sources of emissions. Case studies of petroleum refineries in Nigeria were assessed against selected global case studies. The review highlights the need for further modelling investigations to help improve FCC design to more effectively meet product specification requirements while complying with stricter environmental legislation.
Keywords: Design, emissions, fluid catalytic cracking, petroleum refineries.
244 Biodegradation of Malathion by Acinetobacter baumannii Strain AFA Isolated from Domestic Sewage in Egypt
Authors: Ahmed F. Azmy , Amal E. Saafan, Tamer M. Essam, Magdy A. Amin, Shaban H. Ahmed
Abstract:
Bacterial strains capable of degrading malathion were isolated from domestic sewage by an enrichment culture technique. Three bacterial strains were screened and identified as Acinetobacter baumannii (AFA), Pseudomonas aeruginosa (PS1), and Pseudomonas mendocina (PS2) based on morphological and biochemical identification and 16S rRNA sequence analysis. Acinetobacter baumannii AFA was the most efficient malathion-degrading bacterium and was therefore used for the further biodegradation study. AFA was able to grow in mineral salt medium (MSM) supplemented with malathion (100 mg/l) as a sole carbon source, and within 14 days, 84% of the initial dose was degraded by the isolate, as measured by high performance liquid chromatography. Strain AFA could also degrade other organophosphorus compounds, including diazinon, chlorpyrifos and fenitrothion. The effects of different culture conditions on the degradation of malathion, such as inoculum density, additional carbon or nitrogen sources, temperature and shaking, were examined. Degradation of malathion and bacterial cell growth were accelerated when culture media were supplemented with yeast extract, glucose and citrate. The optimum conditions for malathion degradation by strain AFA were an inoculum density of 1.5×10¹² CFU/ml at 30°C with shaking. Specific polymerase chain reaction primers were designed manually using multiple sequence alignment of the corresponding carboxylesterase enzymes of Acinetobacter species. Sequencing of the amplified PCR product and phylogenetic analysis showed a low degree of homology with the other carboxylesterase enzymes of Acinetobacter strains, suggesting that this enzyme is a novel esterase. The isolated bacterial strains may have a potential role in the bioremediation of malathion-contaminated sites.
Keywords: Acinetobacter baumannii, biodegradation, Malathion, organophosphate pesticides.
243 Use of Multiple Linear Regressions to Evaluate the Influence of O3 and PM10 on Biological Pollutants
Authors: S. I. V. Sousa, F.G. Martins, M. C. Pereira, M. C. M. Alvim-Ferraz, H. Ribeiro, M. Oliveira, I. Abreu
Abstract:
Exposure to ambient air pollution has been linked to a number of health outcomes, ranging from modest transient changes in the respiratory tract and impaired pulmonary function, through restricted activity and reduced performance, to increased emergency room visits, hospital admissions and mortality. The increase of allergenic symptoms has been associated with air contaminants such as ozone, particulate matter, fungal spores and pollen. Considering the potential relevance of crossed effects of non-biological pollutants, airborne pollens and fungal spores on allergy worsening, the aim of this work was to evaluate the influence of non-biological pollutants (O3 and PM10) and meteorological parameters on the concentrations of pollen and fungal spores using multiple linear regressions. The data considered in this study were collected in Oporto, the second largest Portuguese city, located in the north of the country. Daily means of O3, PM10, pollen and fungal spore concentrations, temperature, relative humidity, precipitation and wind velocity for 2003, 2004 and 2005 were considered. Results showed that the 90th percentile of the adjusted coefficient of determination, P90(R²adj), of the multiple regressions varied from 0.613 to 0.916 for pollen and from 0.275 to 0.512 for fungal spores. O3 and PM10 were shown to have some influence on the biological pollutants. Among the meteorological parameters analysed, temperature was the one that most influenced the airborne pollen and fungal spore concentrations. Relative humidity was also shown to have some influence on fungal spore dispersion. Nevertheless, the models for each pollen and fungal spore type differed depending on the analysed period, which means that the correlations identified as statistically significant may nevertheless not be consistent enough.
Keywords: Air pollutants, meteorological parameters, biological pollutants, multiple linear correlations.
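To make the modelling step concrete, the sketch below fits a multiple linear regression of daily pollen concentration on O3, PM10 and meteorological parameters and reports the adjusted R². It is a minimal sketch with synthetic data and assumed column names, not the authors' Oporto dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily data: predictors and an airborne pollen concentration response.
rng = np.random.default_rng(1)
n = 365
df = pd.DataFrame({
    "O3": rng.normal(60, 15, n),
    "PM10": rng.normal(35, 10, n),
    "temperature": rng.normal(15, 6, n),
    "rel_humidity": rng.normal(75, 10, n),
})
df["pollen"] = 2.0 * df["temperature"] + 0.3 * df["O3"] + rng.normal(0, 10, n)

X = sm.add_constant(df[["O3", "PM10", "temperature", "rel_humidity"]])
model = sm.OLS(df["pollen"], X).fit()

# Adjusted coefficient of determination (R2adj), the statistic summarized in the abstract.
print(model.rsquared_adj)
print(model.pvalues)  # significance of each predictor
```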
242 Application of Fuzzy Logic Approach for an Aircraft Model with and without Winglet
Authors: Altab Hossain, Ataur Rahman, Jakir Hossen, A.K.M. P. Iqbal, SK. Hasan
Abstract:
The measurement of aerodynamic forces and moments acting on an aircraft model is important for the development of wind tunnel measurement technology to predict the performance of the full-scale vehicle. The potential of an aircraft model with and without winglet, and its aerodynamic characteristics with a NACA 65-3-218 wing, have been studied using a subsonic wind tunnel with a 1 m × 1 m rectangular test section, 2.5 m long, at the Aerodynamics Laboratory, Faculty of Engineering, University Putra Malaysia. Focusing on analyzing the aerodynamic characteristics of the aircraft model, two main issues are studied in this paper. First, a six-component wind tunnel external balance is used for measuring lift, drag and pitching moment. Second, tests are conducted on the aircraft model with and without winglet in two configurations at Reynolds numbers of 1.7×10⁵, 2.1×10⁵, and 2.5×10⁵ for different angles of attack. The fuzzy logic approach is found to be efficient for the representation, manipulation and utilization of aerodynamic characteristics. Therefore, the primary purpose of this work was to investigate the relationship between lift and drag coefficients, free-stream velocities and angles of attack, and to illustrate how fuzzy logic might play an important role in the study of the lift characteristics of an aircraft model with the addition of certain winglet configurations. Results of the developed fuzzy logic system were compared with the experimental results. For the lift coefficient analysis, the means of the actual and predicted values were 0.62 and 0.60, respectively. The correlation between the actual and predicted values of the lift coefficient (from the FLS model) at different angles of attack was 0.99. The mean relative error between actual and predicted values was 5.18% for a velocity of 26.36 m/s, which is below the acceptable limit (10%). The goodness of fit of the predicted values was 0.95, which is close to 1.0.
Keywords: Wind tunnel, winglet, lift coefficient, fuzzy logic.
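The abstract reports its validation in terms of correlation, mean relative error and goodness of fit between measured and fuzzy-predicted lift coefficients. The snippet below computes those three metrics for a pair of arrays; the sample values are placeholders, not the experimental data.

```python
import numpy as np

def validation_metrics(actual, predicted):
    """Correlation, mean relative error (%) and R^2 goodness of fit
    between measured and model-predicted coefficients."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    corr = np.corrcoef(actual, predicted)[0, 1]
    mre = np.mean(np.abs((actual - predicted) / actual)) * 100.0
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return corr, mre, r2

# Placeholder lift-coefficient values at increasing angles of attack.
cl_measured = [0.20, 0.35, 0.48, 0.62, 0.75]
cl_predicted = [0.21, 0.33, 0.50, 0.60, 0.77]
print(validation_metrics(cl_measured, cl_predicted))
```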
241 Copper Price Prediction Model for Various Economic Situations
Authors: Haidy S. Ghali, Engy Serag, A. Samer Ezeldin
Abstract:
Copper is an essential raw material used in the construction industry. During 2021 and the first half of 2022, the global market suffered from significant fluctuations in copper raw material prices due to the aftermath of both the COVID-19 pandemic and the Russia-Ukraine war, which exposed consumers to unexpected financial risk. Therefore, this paper aims to develop two hybrid price prediction models using artificial neural networks and long short-term memory (ANN-LSTM), in Python, that can forecast the average monthly copper prices traded on the London Metal Exchange; the first model is a multivariate model that forecasts the copper price of the next month, and the second is a univariate model that predicts the copper prices of the upcoming three months. Historical data of average monthly London Metal Exchange copper prices are collected from January 2009 to July 2022, and potential external factors are identified and employed in the multivariate model. These factors lie under three main categories, including energy prices and economic indicators of the three major copper-exporting countries, depending on data availability. Before developing the LSTM models, the collected external parameters are analyzed with respect to the copper prices using correlation and multicollinearity tests in R software; the parameters are then further screened to select those that influence the copper prices. The two LSTM models are then developed, and the dataset is divided into training, validation, and testing sets. The results show that the performance of the 3-month prediction model is better than that of the 1-month prediction model, but both models can act as predicting tools for diverse economic situations.
Keywords: Copper prices, prediction model, neural network, time series forecasting.
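As a rough companion to the univariate model described above, the sketch below trains a small Keras LSTM on a windowed monthly price series to predict the next three months. The window size, layer sizes and synthetic series are assumptions for illustration, not the authors' architecture or the London Metal Exchange data.

```python
import numpy as np
import tensorflow as tf

# Hypothetical monthly average copper prices (USD/tonne), scaled to [0, 1] before training.
prices = np.abs(np.cumsum(np.random.default_rng(2).normal(0, 150, 160))) + 6000.0
scaled = (prices - prices.min()) / (prices.max() - prices.min())

def make_windows(series, n_in=12, n_out=3):
    """Slide a 12-month input window, predicting the following 3 months."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X)[..., None], np.array(y)

X, y = make_windows(scaled)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(50, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(3),  # three-month-ahead forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, validation_split=0.2, verbose=0)

next_quarter_scaled = model.predict(X[-1:])  # forecast of the next three months (scaled)
```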
240 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis
Authors: Isao Taguchi, Yasuo Sugai
Abstract:
This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output layer unit. Compared with more recent neural networks, such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When the objects to be learned are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics to an output layer unit show oscillation during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multilayered neural network, and this paper demonstrates the validity of the multi-stage learning method. Specifically, computer experiments verify that both learning accuracy and learning time are improved when the BP method is used as the learning rule within the multi-stage learning method. In learning, the oscillatory phenomena of a learning curve play an important role in learning performance. The authors also discuss the mechanisms by which oscillatory phenomena occur during learning. Furthermore, the authors discuss the reasons why the errors of some data remain large even after learning, based on observations of behavior during learning.
Keywords: Data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.
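The core idea of the abstract, classifying training data by how hard it is to learn and feeding it to the network in stages, can be sketched with scikit-learn's warm-started MLP as below. The two-stage split by per-sample error is an assumption used for illustration; it is not the authors' exact staging criterion or network.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy function-approximation data: a smooth region plus a hard, high-frequency region.
rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (600, 1))
y = np.sin(3 * X[:, 0]) + 0.5 * np.sin(25 * X[:, 0]) * (X[:, 0] > 0.5)

# warm_start=True lets successive fit() calls continue from the current weights.
net = MLPRegressor(hidden_layer_sizes=(30,), warm_start=True, max_iter=300, random_state=0)

# Stage 1: train on all data once, then measure per-sample error to judge "learnability".
net.fit(X, y)
errors = np.abs(net.predict(X) - y)
easy = errors <= np.median(errors)

# Stage 2: re-train on the easy subset first, then Stage 3: resume with the full set,
# so the hard (oscillation-prone) samples are introduced in a later phase.
net.fit(X[easy], y[easy])
net.fit(X, y)

print("final mean absolute error:", np.abs(net.predict(X) - y).mean())
```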
239 How Children Synchronize with Their Teacher: Evidence from a Real-World Elementary School Classroom
Authors: Reiko Yamamoto
Abstract:
This paper reports on how synchrony occurs between children and their teacher, and what prevents or facilitates synchrony. The aim of the experiment conducted in this study was to precisely analyze their movements and synchrony and to reveal the process of synchrony in a real-world classroom. Specifically, the experiment was conducted for around 20 minutes during an English as a foreign language (EFL) lesson. The participants were 11 fourth-grade school children and their classroom teacher in a public elementary school in Japan. Previous researchers assert that synchrony causes a state of flow in a class. To check the level of flow, the Short Flow State Scale (SFSS) was adopted. The experimental procedure had four steps: 1) The teacher read aloud the first half of an English storybook to the children. Both the teacher and the children were at their own desks. 2) The children were subjected to an SFSS check. 3) The teacher read aloud the remaining half of the storybook to the children. She had the children remove their desks before reading. 4) The children were again subjected to an SFSS check. The movements of all participants were recorded with a video camera. From the movement analysis, it was found that the children synchronized better with the teacher in Step 3 than in Step 1, and that the teacher's movement became free and outstanding without a desk. This implies that the desk acted as a barrier between the children and the teacher. Removal of this barrier resulted in the children's reactions becoming synchronized with those of the teacher. The SFSS results proved that the children experienced more flow without a barrier than with a barrier. Apparently, synchrony is what caused flow or social emotions in the classroom. The main conclusion is that synchrony leads to cognitive outcomes such as children's academic performance in EFL learning.
Keywords: Movement synchrony, teacher–child relationships, English as a foreign language, EFL learning.
238 Multipath Routing Protocol Using Basic Reconstruction Routing (BRR) Algorithm in Wireless Sensor Network
Authors: K. Rajasekaran, Kannan Balasubramanian
Abstract:
A sensor network consists of multiple detection locations called sensor nodes, each of which is tiny, lightweight and portable. Single-path routing protocols in wireless sensor networks can lead to holes in the network, since only the nodes present in the single path are used for data transmission. Apart from advantages such as reduced computation, complexity and resource utilization, they have drawbacks such as reduced throughput, increased traffic load and delay in data delivery. Therefore, multipath routing protocols are preferred for WSNs. Distributing the traffic among multiple paths increases the network lifetime. We propose a scheme in which the data are transmitted through a dominant path to save energy. In order to obtain a high delivery ratio, a basic route reconstruction protocol is utilized to reconstruct the path whenever a failure is detected. A basic reconstruction routing (BRR) algorithm is proposed, in which a node can leap over a path failure by using the already existing routing information from its neighbourhood while the collected data are transmitted from the source to the sink. In order to save energy and attain a high data delivery ratio, data are transmitted along multiple paths, which is achieved by the BRR algorithm whenever a failure is detected. Further, an analysis of how the proposed protocol overcomes the drawbacks of the existing protocols is presented. The performance of our protocol is compared to AOMDV and the energy efficient node-disjoint multipath routing protocol (EENDMRP). The system is implemented using NS-2.34. The simulation results show that the proposed protocol achieves a high delivery ratio with low energy consumption.
Keywords: Multipath routing, WSN, energy efficient routing, alternate route, assured data delivery.
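The following sketch illustrates the general idea of local route reconstruction as described above: when the next hop on the dominant path fails, a node falls back to routing information already held by its neighbourhood instead of rediscovering a route from the source. The data structures and selection rule are assumptions for illustration; they are not the authors' BRR specification.

```python
# Minimal sketch of neighbourhood-based route reconstruction (assumed behaviour).
# Each node keeps its alive neighbours and, for each neighbour, that neighbour's
# known next hop towards the sink (pre-existing routing information).

routing_table = {
    "A": {"next_hop": "B", "neighbours": {"B": "D", "C": "D"}},
    "B": {"next_hop": "D", "neighbours": {"D": "SINK"}},
    "C": {"next_hop": "D", "neighbours": {"D": "SINK"}},
    "D": {"next_hop": "SINK", "neighbours": {}},
}
alive = {"A", "C", "D", "SINK"}  # node B has failed

def forward(node, sink="SINK"):
    """Forward a packet hop by hop, leaping over failed next hops via neighbours."""
    path = [node]
    while node != sink:
        nxt = routing_table[node]["next_hop"]
        if nxt not in alive:
            # Reconstruction: pick an alive neighbour that already knows a route onward.
            candidates = [n for n in routing_table[node]["neighbours"] if n in alive]
            if not candidates:
                raise RuntimeError("no alternate route from %s" % node)
            nxt = candidates[0]
        path.append(nxt)
        node = nxt
    return path

print(forward("A"))  # e.g. ['A', 'C', 'D', 'SINK']
```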
237 Evaluating the Validity of Computational Fluid Dynamics Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements
Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck
Abstract:
This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from a previous measurement campaign designed at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incident from the north and northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and the grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the mean geometric bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow.
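For reference, the three statistical measures named in the second validation step have standard definitions, sketched below for paired observed and predicted concentrations; this is generic code, not tied to the MUST dataset.

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB = 2 * (mean(obs) - mean(pred)) / (mean(obs) + mean(pred))."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

def geometric_mean_bias(obs, pred):
    """MG = exp(mean(ln obs) - mean(ln pred)); requires strictly positive values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.exp(np.log(obs).mean() - np.log(pred).mean())

def normalized_mean_square_error(obs, pred):
    """NMSE = mean((obs - pred)^2) / (mean(obs) * mean(pred))."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

# Placeholder concentration pairs; a perfect model gives FB = 0, MG = 1, NMSE = 0.
c_obs = [1.2, 0.8, 2.5, 1.9]
c_pred = [1.0, 0.9, 2.2, 2.1]
print(fractional_bias(c_obs, c_pred),
      geometric_mean_bias(c_obs, c_pred),
      normalized_mean_square_error(c_obs, c_pred))
```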
236 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain
Authors: Suman Senapati, Goutam Saha
Abstract:
Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions, causing perturbations that lead to a mismatch between the training and testing environments and degrade performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. However, Bayesian marginal models cannot capture the inter-scale statistical dependencies between different wavelet scales. Simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent in nature, yet wavelet coefficients have significant inter-scale dependency. This paper exploits this inter-scale dependency using a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived by a Maximum a Posteriori (MAP) estimator. A framework based on these is proposed to denoise speech signals for automatic speaker identification problems. The robustness of the proposed framework is tested for text-independent speaker identification on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy compared to other estimators on a popular Gaussian Mixture Model (GMM) based speaker model with Mel-Frequency Cepstral Coefficient (MFCC) features.
Keywords: Speaker identification, Log Gabor Wavelet, Bayesian bivariate estimator, circularly symmetric probability density function, SIRP.
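The back-end referred to in the last sentence, a GMM speaker model over MFCC features, can be sketched as below with librosa and scikit-learn. This shows only the baseline classifier; the paper's contribution, the LGW-domain joint shrinkage denoising applied before feature extraction, is not reproduced here, and the synthetic sinusoidal "speech", model sizes and speaker names are placeholders.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

SR = 16000  # sample rate (Hz)

def mfcc_frames(signal, sr=SR, n_mfcc=13):
    """Return MFCC feature frames (frames x coefficients) for a speech signal."""
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T

# Placeholder "speech": two synthetic signals standing in for two enrolled speakers.
rng = np.random.default_rng(6)
t = np.arange(SR * 2) / SR
speakers = {
    "spk1": np.sin(2 * np.pi * 120 * t) + 0.05 * rng.normal(size=t.size),
    "spk2": np.sin(2 * np.pi * 210 * t) + 0.05 * rng.normal(size=t.size),
}

# Enrolment: one diagonal-covariance GMM per speaker, trained on that speaker's MFCC frames.
models = {spk: GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
              .fit(mfcc_frames(sig)) for spk, sig in speakers.items()}

def identify(signal):
    """Score a test utterance against every speaker GMM; highest average log-likelihood wins."""
    frames = mfcc_frames(signal)
    return max(models, key=lambda spk: models[spk].score(frames))

print(identify(speakers["spk2"] + 0.1 * rng.normal(size=t.size)))  # expected: spk2
```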
235 Comparative Study on the Effect of Substitution of Li and Mg Instead of Ca on Structural and Biological Behaviors of Silicate Bioactive Glass
Authors: Alireza Arab, Morteza Elsa, Amirhossein Moghanian
Abstract:
In this study, experiments were carried out to achieve a promising multifunctional and modified silicate based bioactive glass (BG). The main aim of the study was to investigate the effect of lithium (Li) and magnesium (Mg) substitution on the in vitro bioactivity of substituted-58S BG. The modified BGs were synthesized in 60SiO2–(36-x)CaO–4P2O5–(x)Li2O and 60SiO2–(36-x)CaO–4P2O5–(x)MgO (where x = 0, 5, 10 mol.%) quaternary systems by the sol-gel method. Their performance was investigated through different aspects such as biocompatibility and antibacterial activity, as well as their effect on alkaline phosphatase (ALP) activity and proliferation of MC3T3 cells. The antibacterial efficiency was evaluated against methicillin-resistant Staphylococcus aureus bacteria. To do so, CaO was substituted with Li2O and MgO up to 10 mol % in 58S-BGs; the samples were then immersed in simulated body fluid for up to 14 days and characterized by X-ray diffraction, Fourier transform infrared spectroscopy, inductively coupled plasma atomic emission spectrometry, and scanning electron microscopy. Results indicated that this modification led to a retarding effect on in vitro hydroxyapatite (HA) formation due to the lower supersaturation degree for nucleation of HA compared with 58S-BG, with magnesium showing a more pronounced effect. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and ALP analyses illustrated that substitution of both Li2O and MgO, up to 5 mol %, increased biocompatibility and stimulated proliferation of the pre-osteoblast MC3T3 cells in comparison to the control specimen. Regarding bactericidal efficiency, the substitution of either Li or Mg for Ca in the 58S BG composition led to statistically significant differences in the antibacterial behavior of the substituted BGs. The sample containing 5 mol % CaO/Li2O substitution (BG-5L) was selected as a multifunctional biomaterial for bone repair/regeneration due to its improved biocompatibility, enhanced ALP activity and antibacterial efficiency among all of the synthesized L-BGs and M-BGs.
Keywords: Alkaline, alkaline earth, bioactivity, biomedical applications, sol-gel processes.
234 Improving the Software Homologation Process through Peer Review: An Experience Report on Android Development Environment
Authors: Camila Bernardon, Diana Lemos, Mario Garcia, Thiago Souto, Bruno Bonifacio
Abstract:
In the current technological market environment, ensuring the quality of new products has become a complex challenge. In this scenario, companies have been investing in solutions that aim to reduce the execution time of software testing and lead to cost efficiency. However, companies that have a complex and specialized testing environment usually face barriers related to costly testing processes, especially in distributed settings. Sidia Institute of Technology works on research and development for the Android platform for mobile devices in Latin America. As we work in a global software development (GSD) scope, we have faced barriers caused by failures detected late, which have caused delays in the homologation release process on Android projects. Thus, we adopted an internal review process as an alternative to reduce these failures. This paper presents the experience of a homologation team adopting an internal review process in order to increase performance by improving test efficiency. Using this approach, we achieved a substantial improvement in the quality, reliability and timeliness of our deliveries. Quantitative analysis identified a 6% increase in homologation efficiency after adoption of the process. In addition, we performed a qualitative analysis of data collected through an online questionnaire. In particular, the results show that the association between failure reduction and adoption of the review process improves quality, which has a positive effect on project milestones. We hope this report can help other companies and the scientific community improve their processes, thereby increasing their competitive advantage.
Keywords: Android, GSD, quality improvement process, mobile products.
233 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction of the large diffusion of non-linear and time-varying single-phase and three-phase loads with power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because these filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be exploited to achieve the required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine what size of single tuned passive filter works best in distribution networks, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level, and thus improve the load power factor. The optimization technique minimizes voltage total harmonic distortion (VTHD) and current total harmonic distortion (ITHD) while maintaining a given power factor within a specified range. In accordance with IEEE Standard 519, both indices are treated as constraints in the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
Keywords: Harmonics, passive filter, power factor, power quality.
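As background for the sizing problem, the sketch below computes the capacitance, inductance and damping resistance of a single tuned filter from its reactive power rating, tuned harmonic order and quality factor, using the textbook relations Xc = V²/Qc, L = Xc/(h²ω) and R = X0/Q. The numeric values are illustrative, not taken from the paper's case studies.

```python
import math

def single_tuned_filter(v_ll, q_var, h, f=50.0, quality=40.0):
    """Size a single tuned passive filter.

    v_ll    : line-to-line RMS voltage (V)
    q_var   : reactive power supplied by the filter at fundamental (var)
    h       : harmonic order the filter is tuned to
    f       : fundamental frequency (Hz)
    quality : filter quality factor Q = X0 / R
    """
    w = 2 * math.pi * f
    xc = v_ll ** 2 / q_var          # capacitive reactance at fundamental
    c = 1.0 / (w * xc)              # capacitance
    l = xc / (h ** 2 * w)           # inductance so that h*w*L = Xc/h (resonance at h*f)
    x0 = math.sqrt(l / c)           # characteristic reactance at the tuned frequency
    r = x0 / quality                # series damping resistance
    return c, l, r

# Example: a 2 Mvar filter tuned near the 5th harmonic on an 11 kV bus.
C, L, R = single_tuned_filter(v_ll=11e3, q_var=2e6, h=5)
print(f"C = {C*1e6:.1f} uF, L = {L*1e3:.2f} mH, R = {R:.3f} ohm")
```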
232 Experimental Investigation into Chaotic Features of Flow Gauges in Automobile Fuel Metering System
Authors: S. K. Fasogbon
Abstract:
Chaotic behavior may lead to instability, extreme sensitivity and performance reduction in control systems. It is therefore important to understand the causes of such undesirable characteristics in control systems, especially in automobile fuel gauges. This is because, without accurate fuel gauges in automobile systems, it is difficult, if not impossible, to embark on a journey, whether during odd hours of the day or where fuel is difficult to obtain. To this end, this work studied the impacts of fuel tank rust and of a faulty fuel gauge component (the voltage stabilizer) on the chaotic characteristics of fuel gauges. The results obtained were analyzed using the Graph iSOFT package. Over the range of experiments conducted, the results showed that rust in the fuel tank alters the flow density, and consequently the fluid pressure and ultimately the flow velocity of the fuel. The responses of the fuel gauge pointer to the faulty voltage stabilizer were erratic, causing noticeable instability in the indicated measurands. The experiment also showed that the fuel gauge performed optimally, indicating the highest degree of accuracy, when the rust-free tank and non-faulty voltage stabilizer conditions were combined (±6.75% measurand error), as compared to the rust-free tank condition alone (±15% measurand error) and the non-faulty voltage stabilizer condition alone (±40% measurand error). The study concludes that both fuel tank rust and a faulty voltage stabilizer component have a significant effect on the sensitivity of the fuel gauge and, ultimately, on its accuracy. Also, based on the literature, our findings can be expected to hold for other fluid meters and gauges used in plant machinery and most hydraulic systems.
Keywords: Chaotic system, degree of accuracy, measurand, sensitivity of fuel gauge.
231 Optimal Design of Selective Excitation Pulses in Magnetic Resonance Imaging using Genetic Algorithms
Authors: Mohammed A. Alolfe, Abou-Bakr M. Youssef, Yasser M. Kadah
Abstract:
The proper design of RF pulses in magnetic resonance imaging (MRI) has a direct impact on the quality of acquired images and is needed for many applications. Several techniques have been proposed to obtain the RF pulse envelope given the desired slice profile. Unfortunately, these techniques do not take into account the limitations of practical implementation, such as limited amplitude resolution. Moreover, implementing constraints for special RF pulses is not possible with most techniques. In this work, we propose an approach for designing optimal RF pulses under, in principle, arbitrary constraints. The new technique poses the RF pulse design problem as a combinatorial optimization problem and uses efficient techniques from this area, such as genetic algorithms (GA), to solve it. In particular, the objective function is the norm of the difference between the desired profile and the one obtained from solving the Bloch equations for the current RF pulse design values. The proposed approach is verified using analytical-solution-based RF simulations and compared to previous methods such as the Shinnar-Le Roux (SLR) method. The options and parameters that control the GA, which can significantly affect its performance, are analyzed, selected and tested to obtain the best results, and the outcomes are compared to previous works in this field. The results show a significant improvement over conventional design techniques, identify the best GA options and parameters, and suggest the practicality of the new technique for important applications such as slice selection for large flip angles, unconventional spatial encoding, and other clinical uses.
Keywords: Selective excitation, magnetic resonance imaging, combinatorial optimization, pulse design.
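The optimization loop described above can be sketched generically: a population of discretized pulse envelopes is evolved to minimize the norm of the difference between the desired slice profile and the profile produced by the current pulse. In the sketch the Bloch-equation solver is replaced by a hypothetical placeholder response (a plain Fourier-like transform), so only the GA machinery is illustrated, not the paper's simulation.

```python
import numpy as np

rng = np.random.default_rng(4)
N_SAMPLES, POP, GENS = 64, 80, 300

# Desired slice profile (idealized rectangular passband).
freq = np.linspace(-1, 1, N_SAMPLES)
desired = (np.abs(freq) < 0.25).astype(float)

def simulate_profile(pulse):
    # Placeholder for the Bloch-equation solution of the current pulse:
    # here a small-tip-angle-style magnitude of the pulse's Fourier transform.
    return np.abs(np.fft.fftshift(np.fft.fft(pulse, N_SAMPLES)))

def fitness(pulse):
    return np.linalg.norm(desired - simulate_profile(pulse))  # objective to minimize

population = rng.normal(0, 0.05, (POP, N_SAMPLES))
for _ in range(GENS):
    scores = np.array([fitness(p) for p in population])
    parents = population[np.argsort(scores)[: POP // 2]]      # truncation selection
    cut = rng.integers(1, N_SAMPLES, POP // 2)                 # one-point crossover
    kids = np.array([np.concatenate((parents[i][:c], parents[-i - 1][c:]))
                     for i, c in enumerate(cut)])
    kids += rng.normal(0, 0.01, kids.shape) * (rng.random(kids.shape) < 0.1)  # mutation
    population = np.vstack((parents, kids))

best = population[np.argmin([fitness(p) for p in population])]
print("final objective:", fitness(best))
```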
230 Travel Time Evaluation of an Innovative U-Turn Facility on Urban Arterial Roadways
Authors: Ali Pirdavani, Tom Brijs, Tom Bellemans, Geert Wets, Koen Vanhoof
Abstract:
Signalized intersections on high-volume arterials are often congested during peak hours, causing a decrease in through-movement efficiency on the arterial. Much of the vehicle delay incurred at conventional intersections is caused by high left-turn demand. Unconventional intersection designs attempt to reduce intersection delay and travel time by rerouting left turns away from the main intersection and replacing them with a right turn followed by a U-turn. The proposed new type of U-turn intersection is geometrically designed with a raised island which provides a protected U-turn movement. In this study, several scenarios based on different distances between the U-turn and the main intersection, traffic volumes of major/minor approaches and percentages of left-turn volumes were simulated using AIMSUN, a traffic microsimulation package. Subsequently, models are proposed to compute the travel time of each movement. Finally, by correlating these equations with field data collected at implemented U-turn facilities, the reliability of the proposed models is confirmed. With these models it is possible to calculate the travel time of each movement under any kind of geometric and traffic condition. By comparing the travel time of a conventional signalized intersection with that of the U-turn intersection, it would be possible to decide whether or not to convert signalized intersections into this new kind of U-turn facility; however, such a comparison is outside the scope of this research, and in this paper only the travel time of the innovative U-turn facility is predicted. According to before-and-after studies of the traffic performance of implemented U-turn facilities, this new type of U-turn facility commonly produces lower travel times. Thus, the use of this type of unconventional intersection should be seriously considered.
Keywords: Innovative U-turn facility, microsimulation, travel time, unconventional intersection design.
229 Composite Coatings of Piezoelectric Quartz Sensors Based on Viscous Sorbents and Casein Micelles
Authors: Anastasiia Shuba, Tatiana Kuchmenko, Umarkhanov Ruslan, Bogdanova Ekaterina
Abstract:
The development of new sensitive coatings for sensors is one of the key directions in the development of sensor technologies. Recently, there has been a trend towards multicomponent coatings for sensors, which make it possible to increase sensitivity and specificity and to improve the performance properties of sensors. When analyzing samples with a complex matrix of biological origin, the inclusion of micelles of bioactive substances (amino and nucleic acids, peptides, proteins) in the composition of the sensor coating can also increase the useful analytical information. The purpose of this work is to evaluate the analytical characteristics of composite coatings of piezoelectric quartz sensors based on medium-molecular viscous sorbents with incorporated micellar casein concentrate during the sorption of vapors of volatile organic compounds. The sorption properties of the coatings were studied by piezoelectric quartz microbalance. Macromolecular compounds (dicyclohexyl-18-crown-6, Triton X-100, lanolin, micellar casein concentrate) were used as sorbents. Highly volatile organic compounds of various classes (alcohols, acids, aldehydes, esters) and water were selected as test substances. It has been established that composite coatings with the inclusion of micellar casein are more stable and more selective to vapors of highly volatile compounds than to water vapor. The method and technique of forming a composite coating using molecular viscous sorbents do not affect the kinetic features of VOC sorption. When casein micelles are used, the features of sorption kinetics depend on the matrix of the coating.
Keywords: Composite coating, piezoelectric quartz microbalance, sensor, volatile organic compounds.
228 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y that are not, or are less, related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), requires only a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
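The construction described above amounts to a change-of-variables density with a partitioned latent vector. A hedged restatement in standard normalizing flow notation follows; the factorized prior over [zP, zN] is inferred from the abstract, not quoted from the paper.

```latex
% One-to-one NF transform between the response y and the latent z = [z_P, z_N]:
%   y = f_\theta(z), \qquad z = f_\theta^{-1}(y),
% with prior  p(z \mid x) = p(z_P \mid x)\,\mathcal{N}(z_N; 0, I),
% where p(z_P \mid x) is the posterior of an elementary predictive model
% (e.g. logistic/linear regression). The conditional density of y then follows
% from the change-of-variables formula:
\[
  p(y \mid x) \;=\; p\!\left(f_\theta^{-1}(y) \,\middle|\, x\right)
  \left|\det \frac{\partial f_\theta^{-1}(y)}{\partial y}\right| .
\]
```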
227 Microstructural Evolution of an Interface Region in a Nickel-Based Superalloy Joint Produced by Direct Energy Deposition
Authors: M. Ferguson, T. Konkova, I. Violatos
Abstract:
Microstructure analysis of additively manufactured (AM) materials is an important step in understanding the interrelationship between mechanical properties and materials performance. Literature on the effect of laser-based AM process parameters on the microstructure in the substrate-deposit interface is limited. The interface region, the adjoining area of substrate and deposit, is characterized by the presence of the fusion zone (FZ) and heat affected zone (HAZ), which experience rapid thermal gyrations resulting in thermally induced transformations. Inconel 718 was utilized as the work material for both the substrate and the deposit. Three blocks of Inconel 718 material were deposited by Direct Energy Deposition (DED) using three different laser powers, 550 W, 750 W and 950 W, respectively. A coupled thermo-mechanical transient approach was utilized to correlate the temperature history to the evolution of the microstructure. The thermal history of the deposition process was monitored with thermocouples installed inside the substrate material. The interface regions of the blocks were analysed with Optical Microscopy (OM) and Scanning Electron Microscopy (SEM), including the electron back-scattered diffraction (EBSD) technique. Laser power was found to influence the dissolution of intermetallic precipitated phases in the substrate and grain growth in the interface region. Microstructure and thermal history data were utilized to draw conclusive comparisons between the investigated process parameters.
Keywords: Additive manufacturing, direct energy deposition, electron back-scatter diffraction, finite element analysis, Inconel 718, microstructure, optical microscopy, scanning electron microscopy, substrate-deposit interface region.
226 Identification of Risks Associated with Process Automation Systems
Authors: J. K. Visser, H. T. Malan
Abstract:
A need exists to identify the sources of risks associated with process automation systems within petrochemical companies or similar energy-related industries. These companies use many different process automation technologies in their value chains. A crucial part of the process automation system is the information technology component featuring in the supervisory control layer. The ever-changing technology within the process automation layers and the rate at which it advances pose a risk to safe and predictable automation system performance. The age of the automation equipment also presents challenges to the operations and maintenance managers of the plant due to obsolescence and unavailability of spare parts. The main objective of this research was to determine the risk sources associated with the equipment that is part of the process automation systems. A secondary objective was to establish whether technology managers and technicians were aware of the risks and shared the same viewpoint on the importance of the risks associated with automation systems. A conceptual model for risk sources of automation systems was formulated from models and frameworks in the literature. This model comprised six categories of risk, which form the basis for identifying specific risks. The model was used to develop a questionnaire that was sent to 172 instrument technicians and technology managers in the company to obtain primary data; 75 completed and useful responses were received. These responses were analyzed statistically to determine the highest risk sources and to determine whether there was a difference in opinion between technology managers and technicians. The most important risks revealed in this study are: 1) the lack of skilled technicians, 2) integration capability of third-party system software, 3) reliability of the process automation hardware, 4) excessive costs pertaining to performing maintenance and migrations on process automation systems, and 5) requirements for third-party communication interfacing compatibility as well as real-time communication networks.
Keywords: Distributed control system, identification of risks, information technology, process automation system.
225 Primary School Teachers’ Conceptual and Procedural Knowledge of Rational Number and Its Effects on Pupils’ Achievement in Rational Numbers
Authors: R. M. Kashim
Abstract:
The study investigated primary school teachers’ conceptual and procedural knowledge of rational numbers and its effects on pupils’ achievement in rational numbers. Specifically, primary school teachers’ level of conceptual knowledge about rational numbers, primary school teachers’ level of procedural knowledge about rational numbers, and the effects of teachers’ conceptual and procedural knowledge on their pupils’ understanding of rational numbers in primary schools are investigated. The study was carried out in the Bauchi metropolis in Bauchi State, Nigeria. The study used a multi-stage design: the first stage was a descriptive design, and the second stage involved a pre-test, post-test quasi-experimental design. Two instruments were used for data collection: the Conceptual and Procedural Knowledge Test (CPKT) and the Rational Number Achievement Test (RAT). The population of the study comprised three mathematics teachers, holders of the Nigerian Certificate in Education (NCE), teaching primary six, and the 210 pupils in their intact classes. The data collected were analyzed using mean, standard deviation, analysis of variance, analysis of covariance and t-test. The findings indicated that pupils taught rational numbers by a teacher with high conceptual and procedural knowledge understand and perform better than pupils taught by a teacher with low conceptual and procedural knowledge of rational numbers. It is, therefore, recommended that teachers in primary schools be encouraged to enrich their conceptual knowledge of rational numbers. Also, teachers’ superior performance in procedural knowledge of rational numbers should not become an obstacle to understanding. Teachers’ conceptual and procedural knowledge of rational numbers should be balanced so that primary school pupils experience better teaching and learning of rational numbers in contemporary schools.
Keywords: Achievement, conceptual knowledge, procedural knowledge, rational numbers.
224 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study
Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin
Abstract:
Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of live video streaming. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper we have conducted subjective evaluation tests on a set of video sequences containing a temporal impairment known as frame freezing. Frame freezing can arise from transmission errors as well as hardware errors and can result in the loss of video frames on the receiving side of a transmission system. In our subjective tests, we evaluated videos containing a single freezing event as well as videos containing multiple freezing events. We recorded our subjective test results for all the videos in order to provide a comparison of the available No Reference (NR) objective algorithms. Finally, we present the performance of the no-reference algorithms used for objective evaluation of the videos and suggest the algorithm that works best. The outcome of this study shows the importance of QoE and its effect on human perception. The subjective evaluation results can serve to validate objective algorithms.
223 MAGNI Dynamics: A Vision-Based Kinematic and Dynamic Upper-Limb Model for Intelligent Robotic Rehabilitation
Authors: Alexandros Lioulemes, Michail Theofanidis, Varun Kanal, Konstantinos Tsiakas, Maher Abujelala, Chris Collander, William B. Townsend, Angie Boisselle, Fillia Makedon
Abstract:
This paper presents a home-based robot-rehabilitation instrument, called “MAGNI Dynamics”, that utilizes a vision-based kinematic/dynamic module and an adaptive haptic feedback controller. The system is expected to provide personalized rehabilitation by adjusting its resistive and supportive behavior according to a fuzzy intelligence controller that acts as an inference system, which correlates the user’s performance to different stiffness factors. The vision module uses the Kinect’s skeletal tracking to monitor the user’s effort in an unobtrusive and safe way, by estimating the torque that affects the user’s arm. The system’s torque estimations are justified by capturing electromyographic data from primitive motions (shoulder abduction and shoulder forward flexion). Moreover, we present and analyze how the Barrett WAM generates a force field with a haptic controller to support or challenge the users. Experiments show that shifting the proportional value, which corresponds to different stiffness factors of the haptic path, can potentially help the user to improve his or her motor skills. Finally, potential areas for future research are discussed, addressing how a rehabilitation robotics framework may include multi-sensing data to improve the user’s recovery process.
Keywords: Human-robot interaction, Kinect, kinematics, dynamics, haptic control, rehabilitation robotics, artificial intelligence.
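One way to read "estimating the torque that affects the user's arm" from skeletal tracking is a static gravitational torque about the shoulder computed from joint positions. The sketch below does exactly that for a two-segment arm; the segment masses, joint coordinates and the static assumption are hypothetical simplifications, not the paper's dynamic model.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

# Hypothetical anthropometric segment masses (kg) for upper arm and forearm+hand.
SEGMENT_MASS = {"upper_arm": 2.1, "forearm_hand": 1.7}

def shoulder_gravity_torque(shoulder, elbow, wrist):
    """Static torque about the shoulder (N*m) from the weight of the arm segments,
    using segment midpoints as centres of mass. Joint positions are 3D points (m),
    e.g. as returned by Kinect skeletal tracking, with y pointing up."""
    shoulder, elbow, wrist = map(np.asarray, (shoulder, elbow, wrist))
    com = {
        "upper_arm": (shoulder + elbow) / 2.0,
        "forearm_hand": (elbow + wrist) / 2.0,
    }
    torque = np.zeros(3)
    for seg, m in SEGMENT_MASS.items():
        r = com[seg] - shoulder                 # lever arm from shoulder to segment COM
        f = np.array([0.0, -m * G, 0.0])        # weight acting downward
        torque += np.cross(r, f)
    return torque

# Example: arm held roughly horizontal during shoulder abduction (coordinates in metres).
tau = shoulder_gravity_torque([0, 1.4, 0], [0.3, 1.4, 0], [0.55, 1.4, 0])
print("gravity torque about shoulder (N*m):", tau, "magnitude:", np.linalg.norm(tau))
```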
222 Design and Validation of an Aerodynamic Model of the Cessna Citation X Horizontal Stabilizer Using both OpenVSP and Digital Datcom
Authors: Marine Segui, Matthieu Mantilla, Ruxandra Mihaela Botez
Abstract:
This research is part of a major project at the Research Laboratory in Active Controls, Avionics and Aeroservoelasticity (LARCASE) aiming to improve the cruise performance of a Cessna Citation X aircraft by applying morphing wing technology to its horizontal tail. The horizontal stabilizer of the Cessna Citation X turns around its span axis with an angle between -8 and 2 degrees. Within this range, the horizontal stabilizer certainly generates some unwanted drag. To cancel this drag, the LARCASE proposes to trim the aircraft with a horizontal stabilizer equipped with morphing wing technology. This technology aims to optimize aerodynamic performance by changing the conventional horizontal tail shape during flight. As a consequence, this technology will be able to generate enough lift on the horizontal tail to balance the aircraft without generating unwanted drag. To conduct this project, an accurate aerodynamic model of the horizontal tail is first required. This aerodynamic model will ultimately allow a precise comparison between the results of a conventional horizontal tail and those of a morphed horizontal tail. This paper presents how this aerodynamic model was designed. It shows how the 2D geometry of the horizontal tail was collected and how the unknown airfoil shape of the horizontal tail was recovered. Finally, the complete horizontal tail airfoil shape was found, and a comparison between the aerodynamic polars of the real horizontal tail and of the horizontal tail obtained in this paper shows a maximum difference of 0.04 in the lift or drag coefficient, which is very good. Aerodynamic polar data of the aircraft horizontal tail are obtained from the CAE Inc. level D research aircraft flight simulator of the Cessna Citation X.
Keywords: Aerodynamic, Cessna, Citation X, coefficient, Datcom, drag, lift, longitudinal, model, OpenVSP.
221 Minimization of Non-Productive Time during 2.5D Milling
Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna
Abstract:
In modern manufacturing systems, thermal cutting techniques using oxyfuel, plasma and laser have become indispensable for the shape forming of high quality complex components; however, conventional chip removal production techniques still have their widespread place in the manufacturing industry. Both types of machining operations require the positioning of the end effector tool at the edge where the cutting process commences. This repositioning of the cutting tool is repeated several times in every machining operation and is termed non-productive time or airtime motion. Minimization of this non-productive machining time plays an important role in mass production with high speed machining. Since the tool moves from one region to another by rapid movement and visits each particular region only once in the whole operation, the non-productive time can be minimized by synchronizing the tool movements. In this work, the problem is formulated as a general travelling salesman problem (TSP) and a genetic algorithm approach is applied to solve it. To improve the efficiency of the algorithm, the GA has been hybridized with a novel special heuristic and simulated annealing (SA). In the present work, a novel heuristic combined with the GA has been developed for synchronization of toolpath movements during repositioning of the tool. A comparative analysis of the new metaheuristic techniques with the simple genetic algorithm has been performed. The proposed metaheuristic approach shows better performance than the simple genetic algorithm for minimization of non-productive toolpath length. The results obtained with the hybrid simulated annealing genetic algorithm (HSAGA) are also found to be better than those using the simple genetic algorithm alone.
Keywords: Non-productive time, airtime, 2.5D milling, laser cutting, metaheuristic, genetic algorithm, simulated annealing.
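To make the TSP formulation concrete, the sketch below minimizes the total rapid-traverse (airtime) length over a set of cutting start points with a nearest-neighbour tour refined by simulated annealing using 2-opt moves. It is a generic illustration of the formulation, not the authors' HSAGA; the point coordinates and annealing schedule are arbitrary.

```python
import math
import random

random.seed(5)

# Hypothetical XY start points of cutting features on the sheet (mm).
points = [(random.uniform(0, 500), random.uniform(0, 300)) for _ in range(25)]

def tour_length(order):
    """Total airtime (rapid traverse) length of visiting the points in this order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Initial tour: greedy nearest neighbour.
unvisited = set(range(1, len(points)))
tour = [0]
while unvisited:
    last = tour[-1]
    nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
    tour.append(nxt)
    unvisited.remove(nxt)

# Refinement: simulated annealing with 2-opt segment reversals.
temp, cooling = 500.0, 0.995
best, best_len = tour[:], tour_length(tour)
current_len = best_len
for _ in range(20000):
    i, j = sorted(random.sample(range(1, len(tour)), 2))
    candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
    cand_len = tour_length(candidate)
    if cand_len < current_len or random.random() < math.exp((current_len - cand_len) / temp):
        tour, current_len = candidate, cand_len
        if cand_len < best_len:
            best, best_len = candidate[:], cand_len
    temp *= cooling

print("non-productive toolpath length:", round(best_len, 1), "mm")
```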
220 Induced Affectivity and Impact on Creativity: Personal Growth and Perceived Adjustment when Narrating an Intense Emotional Experience
Authors: S. Da Costa, D. Páez, F. Sánchez
Abstract:
We examine the causal role of positive affect in creativity, the association of creativity or innovation in the ideation phase with functional emotional regulation, successful adjustment to stress and dispositional emotional creativity, as well as the predictive role of creativity for positive emotions and social adjustment. The study examines the effects of the modification of positive affect on creativity. Participants write three poems, narrate an infatuation episode, answer a personal growth scale after this episode, perform a creativity task, answer a flow scale after the creativity task, and fill in a dispositional emotional creativity scale. High and low positive affect were induced by asking subjects to write three poems about stimuli with high and low positive connotations. In a neutral condition, the tasks were performed without previous affect induction. Subjects in the high positive affect condition report more positive and fewer negative emotions, more personal growth (effect size r = .24), and their last poem was rated as more original by judges (effect size r = .33). Mediational analysis showed that positive emotions explain the influence of the manipulation on personal growth; positive affect correlates with personal growth at r = .33. The emotional creativity scale correlated with the creativity scores of the creativity task (r = .14) and with the creativity of the narration of the infatuation episode (r = .21). Emotional creativity was also associated, while performing the creativity task, with flow (r = .27) and with affect balance (r = .26). The mediational analysis showed that emotional creativity predicts flow through positive affect. Results suggest that innovation in the ideation phase is associated with a positive affect balance and satisfactory performance, and that dispositional emotional creativity is adaptive.
Keywords: Affectivity, creativity, induction, innovation, psychological factors.