Search results for: imperialist competition algorithm
1527 The Role of the Coach in Elite Equestrian Sport
Authors: Victoria Lewis, L. Dumbell
Abstract:
The British Equestrian Federation (BEF) aims to develop a holistic coach education and certification program, moving away from traditional autocratic instruction in line with the UK Coaching Framework. This framework is based on generic coaching science research in which the coach is cited as a pivotal factor in developing sporting success. Theoretical knowledge suggests that the role of the sports coach is to develop the physical, tactical, technical and psychological attributes of the athlete, and that the coach is responsible for the planning, organization and delivery of the training plan and competition schedule. However, to the best of the authors' knowledge, there is no empirical evidence that this is the role required in equestrian sport, as the rider takes responsibility for many of these tasks. This research aimed to address this void in current knowledge by gaining an understanding of coaching in equestrian sport, in order to improve the coaching education system through awareness of the role of the coach. The objectives were to examine the relationship between coach and rider at the elite level in equestrian sport, providing empirical evidence that the rider is, in part, 'self-coached', and to identify the elite equestrian coach's role in coaching these 'self-coached' riders. A qualitative method using semi-structured interviews was used. A sample of elite coaches (N=3) and elite riders (N=3) were interviewed. Analysis of the transcripts revealed a total of 534 meaning units that were further grouped into sub-themes and general themes from the coaches' perspective and the riders' perspective. This led to a final thematic structure revealing the major dimensions that characterize coaching in elite equestrian sport. It was found that riders at the elite level coach themselves the majority of the time and can therefore be considered 'self-coached' athletes.
However, they do use elite coaches in a mentoring and consultancy role, seeking guidance from the coach on specific problems, sounding out ideas or seeking reassurance that what they are doing is correct. Findings from this research suggest that the rider-coach relationship at the elite level is a professional one, based on trust and respect, but not a close relationship as seen in other sports. The results show the imperative need for the BEF to educate coaches in coaching the self-coached rider at the elite level, particularly in terms of mentoring skills, as well as to incorporate rider education aimed at developing independent, self-coached riders.
Keywords: coaching, elite sport, equestrian, self-coached
Procedia PDF Downloads 171
1526 An Efficient Acquisition Algorithm for Long Pseudo-Random Sequence
Authors: Wan-Hsin Hsieh, Chieh-Fu Chang, Ming-Seng Kao
Abstract:
In this paper, a novel method termed Phase Coherence Acquisition (PCA) is proposed for pseudo-random (PN) sequence acquisition. By employing complex phasors, the PCA requires only complex additions in the order of N, the length of the sequence, whereas the conventional method utilizing the fast Fourier transform (FFT) requires complex multiplications and additions, both in the order of N log2 N. In order to combat noise, the input and local sequences are partitioned and mapped into complex phasors in the PCA. The phase differences between pairs of input and local phasors are utilized for acquisition, and thus complex multiplications are avoided. For more noise robustness, a multi-layer PCA is developed to extract the code phase step by step. The significant reduction in computational load makes the PCA an attractive method, especially when the sequence length N is extremely large, which becomes intractable for FFT-based acquisition.
Keywords: FFT, PCA, PN sequence, convolution theory
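For context, the conventional FFT-based acquisition that the PCA is compared against amounts to a circular correlation between the received and local sequences; a minimal sketch (the sequence length and chip values here are illustrative, not from the paper):

```python
import numpy as np

def fft_acquire(rx, local):
    """Conventional FFT-based acquisition: circular correlation via
    IFFT(FFT(rx) * conj(FFT(local))), costing O(N log2 N) complex
    multiplications and additions."""
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(local)))
    return int(np.argmax(np.abs(corr)))  # estimated code phase

# A toy length-16 +/-1 PN-like sequence received with a phase offset of 5:
rng = np.random.default_rng(0)
local = rng.choice([-1.0, 1.0], size=16)
rx = np.roll(local, 5)
print(fft_acquire(rx, local))  # recovers the offset: 5
```

The PCA replaces the per-lag complex multiplications of this correlation with phase-difference accumulation, which is where the O(N) addition count comes from.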
Procedia PDF Downloads 478
1525 Kalman Filter Gain Elimination in Linear Estimation
Authors: Nicholas D. Assimakis
Abstract:
In linear estimation, the traditional Kalman filter uses the Kalman filter gain to produce estimates and predictions of the n-dimensional state vector from the m-dimensional measurement vector. The computation of the Kalman filter gain requires the inversion of an m x m matrix in every iteration. In this paper, a variation of the Kalman filter that eliminates the Kalman filter gain is proposed. In the time-varying case, eliminating the Kalman filter gain requires the inversion of an n x n matrix and of an m x m matrix in every iteration. In the time-invariant case, it requires the inversion of an n x n matrix in every iteration. The proposed Kalman filter gain elimination algorithm may be faster than the conventional Kalman filter, depending on the model dimensions.
Keywords: discrete time, estimation, Kalman filter, Kalman filter gain
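For reference, the conventional measurement update whose gain computation the paper eliminates can be sketched as follows (a generic textbook Kalman update, not the proposed algorithm; the dimensions n = 2, m = 1 are illustrative):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Conventional Kalman measurement update. Computing the gain K
    requires inverting the m x m innovation covariance S in every
    iteration, which is the cost the proposed variation removes."""
    S = H @ P @ H.T + R                      # m x m innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # n x m Kalman gain
    x_new = x + K @ (z - H @ x)              # state estimate update
    P_new = (np.eye(len(x)) - K @ H) @ P     # covariance update
    return x_new, P_new

# n = 2 state (position, velocity), m = 1 position measurement:
x = np.array([0.0, 1.0])
P = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])
x, P = kalman_update(x, P, np.array([0.9]), H, R)
print(np.round(x, 3))
```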
Procedia PDF Downloads 195
1524 Separate Powers Control Structure of DFIG Based on Fractional Regulator Fed by Multilevel Inverters DC Bus Voltages of a Photovoltaic System
Authors: S. Ghoudelbourk, A. Omeiri, D. Dib, H. Cheghib
Abstract:
This paper shows that the performance of auto-adjustable electric machines can be improved if a fractional dynamic is considered in the control algorithm. This structure is of particular interest for the separate control of the active and reactive power of the doubly-fed induction generator (DFIG) in a wind power conversion chain. Fractional regulators are used in the regulation of the power chain. Usually, the DFIG is supplied by converters through controlled rectifiers, and this arrangement strongly pollutes the line currents, which can be harmful to connected loads and nearby sensitive equipment. The solution adopted to overcome these problems is to supply the DFIG rotor through multilevel inverters fed by PV, which improves the THD. The adopted control structure is tested using Matlab/Simulink, and the results are presented and analyzed for a variable wind.
Keywords: DFIG, fractional regulator, multilevel inverters, PV
Procedia PDF Downloads 401
1523 Selenuranes as Cysteine Protease Inhibitors: Theoretical Investigation on Model Systems
Authors: Gabriela D. Silva, Rodrigo L. O. R. Cunha, Mauricio D. Coutinho-Neto
Abstract:
In the last four decades, the biological activities of selenium compounds have received great attention, particularly for hypervalent derivatives of selenium (IV) used as enzyme inhibitors. The unregulated activity of cysteine proteases is related to the development of several pathologies, such as neurological disorders, cardiovascular diseases, obesity, rheumatoid arthritis, cancer and parasitic infections. These enzymes are therefore a valuable target for designing new small-molecule inhibitors such as selenuranes. Even though there have been advances in the synthesis and design of new selenurane-based inhibitors, little is known about their mechanism of action. It is a given that inhibition occurs through the reaction between the thiol group of the enzyme and the chalcogen atom. However, several open questions remain about the nature of the mechanism (associative vs. dissociative) and about the nature of the reactive species in solution under physiological conditions. In this work, we performed a theoretical investigation on model systems to study the possible routes of the substitution reactions. Among the nucleophiles that may be present in biological systems, our interest is centered on the thiol groups of the cysteine proteases and the hydroxyls of the aqueous environment. We therefore expect this study to clarify the possibility of a reaction route in two stages, the first consisting of the substitution of the chlorine atoms by hydroxyl groups, and the second the replacement of these hydroxyl groups by thiol groups in the selenuranes. The structures of the selenuranes and nucleophiles were optimized using density functional theory with the B3LYP functional and a 6-311+G(d) basis set. Solvent was treated using the IEFPCM method as implemented in the Gaussian 09 code. Our results indicate that water reacts preferentially with the selenuranes, and the resulting hydroxyl groups are then replaced by thiol groups.
The calculations give energy values of -106.07 kcal/mol for double substitution by hydroxyl groups and 96.63 kcal/mol for thiol groups. Solvation and pH reduction promote this route, increasing the energy value for the reaction with hydroxyl groups to -50.76 kcal/mol and decreasing the value for thiol groups to 7.92 kcal/mol. Alternative routes were analyzed for monosubstitution (considering the competition between Cl, OH and SH groups), and they suggest the same route. Similar results were obtained for the aliphatic and aromatic selenuranes studied.
Keywords: chalcogenes, computational study, cysteine proteases, enzyme inhibitors
Procedia PDF Downloads 302
1522 A Topological Study of an Urban Street Network and Its Use in Heritage Areas
Authors: Jose L. Oliver, Taras Agryzkov, Leandro Tortosa, Jose F. Vicent, Javier Santacruz
Abstract:
This paper aims to demonstrate how a topological study of an urban street network can be used as a tool to be applied to heritage conservation areas in a city. In the last decades, we find different kinds of approaches in the disciplines of Architecture and Urbanism based on the so-called sciences of complexity. In this context, this paper uses mathematics from network theory. Hence, it proposes a methodology based on obtaining information from a graph created from a network of urban streets. An algorithm is then used to establish a ranking of importance of the nodes of that network from a topological point of view. The results are applied to a heritage area in a particular city, confronting the data obtained from the mathematical model with data from field work in the case study. As a result of this process, we may conclude which actions need to be implemented in the area, and where those actions would be most effective for the whole heritage site.
Keywords: graphs, heritage cities, spatial analysis, urban networks
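The abstract does not name the ranking algorithm; as one common choice for topological node importance, a PageRank-style power iteration on a toy street graph illustrates the idea (the node names and damping factor are assumptions for illustration only):

```python
import numpy as np

# Toy street network: intersections as nodes, street segments as edges.
nodes = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")]

idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((5, 5))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0

# PageRank by power iteration: walk a random street with probability d,
# otherwise jump to a random intersection.
d = 0.85
M = A / A.sum(axis=0)        # column-stochastic transition matrix
r = np.ones(5) / 5
for _ in range(100):
    r = (1 - d) / 5 + d * M @ r

rank = dict(zip(nodes, r))
print({n: round(v, 3) for n, v in rank.items()})
```

Here the well-connected intersections B and D come out on top, which is the kind of topological importance the methodology confronts with field-work data.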
Procedia PDF Downloads 396
1521 The Design, Control and Dynamic Performance of an Interior Permanent Magnet Synchronous Generator for Wind Power System
Authors: Olusegun Solomon
Abstract:
This paper describes the concept for the design and maximum power point tracking control of an interior permanent magnet synchronous generator wind turbine system. Two design concepts are compared to outline the effect of magnet design on the performance of the interior permanent magnet synchronous generator. An approximate model that includes the effect of core losses has been developed for the machine to simulate the dynamic performance of the wind energy system. An algorithm for maximum power point tracking control is included to describe the process of maximum power extraction.
Keywords: permanent magnet synchronous generator, wind power system, wind turbine
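The MPPT algorithm itself is not detailed in this abstract; a classic perturb-and-observe loop illustrates the idea of maximum power extraction (the power curve and step size below are hypothetical stand-ins, not the paper's):

```python
def mppt_step(v, p, v_prev, p_prev, dv=0.5):
    """One perturb-and-observe MPPT step: keep moving the operating
    point in the direction that last increased extracted power."""
    if (p - p_prev) * (v - v_prev) > 0:
        return v + dv
    return v - dv

# Toy power curve with its maximum power point at v = 30:
power = lambda v: -(v - 30.0) ** 2 + 900.0

v_prev, v = 20.0, 20.5
for _ in range(100):
    v_prev, v = v, mppt_step(v, power(v), v_prev, power(v_prev))
print(v)  # oscillates within one step of the MPP at v = 30
```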
Procedia PDF Downloads 221
1520 A Parallel Poromechanics Finite Element Method (FEM) Model for Reservoir Analyses
Authors: Henrique C. C. Andrade, Ana Beatriz C. G. Silva, Fernando Luiz B. Ribeiro, Samir Maghous, Jose Claudio F. Telles, Eduardo M. R. Fairbairn
Abstract:
The present paper aims at developing a parallel computational model for the numerical simulation of poromechanics analyses of heterogeneous reservoirs. In the context of macroscopic poroelastoplasticity, the hydromechanical coupling between the skeleton deformation and the fluid pressure is addressed by means of two constitutive equations. The first state equation relates the stress to the skeleton strain and pore pressure, while the second state equation relates the Lagrangian porosity change to the skeleton volume strain and pore pressure. A specific algorithm for local plastic integration using a tangent operator is devised. A modified Cam-clay type yield surface with an associated plastic flow rule is adopted to account for both contractive and dilative behavior.
Keywords: finite element method, poromechanics, poroplasticity, reservoir analysis
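The abstract states the two state equations only in words; in standard macroscopic poromechanics notation they typically take the Biot-type form below (the Biot coefficient b and Biot modulus N are the usual symbols, assumed here, not taken from the paper):

```latex
\boldsymbol{\sigma} = \mathbb{C} : \boldsymbol{\varepsilon} - b\,p\,\mathbf{1}
\qquad \text{(stress vs. skeleton strain and pore pressure)}

\phi - \phi_0 = b\,\operatorname{tr}\boldsymbol{\varepsilon} + \frac{p}{N}
\qquad \text{(Lagrangian porosity change vs. volume strain and pore pressure)}
```

Here \(\mathbb{C}\) is the drained stiffness tensor, \(p\) the pore pressure, and \(\phi - \phi_0\) the Lagrangian porosity change.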
Procedia PDF Downloads 391
1519 Development of Generalized Correlation for Liquid Thermal Conductivity of N-Alkane and Olefin
Authors: A. Ishag Mohamed, A. A. Rabah
Abstract:
The objective of this research is to develop a generalized correlation for the prediction of the thermal conductivity of n-alkanes and alkenes. There is little research on, and a lack of correlations for, the thermal conductivity of liquids in the open literature. The available experimental data were collected, covering the groups of n-alkanes and alkenes. The data were correlated to temperature using the Filippov correlation. Nonparametric regression with the Grace algorithm was used to develop the generalized correlation model. A spreadsheet program based on Microsoft Excel was used to plot the data and calculate the values of the coefficients. The results obtained were compared with the data found in Perry's Chemical Engineers' Handbook. The experimental data were correlated over the temperature range 273.15 to 673.15 K, with R2 = 0.99. The developed correlation reproduced experimental data that were not included in the regression with an absolute average percent deviation (AAPD) of less than 7%. The spreadsheet is thus quite accurate and produces reliable data.
Keywords: N-Alkanes, N-Alkenes, nonparametric, regression
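The AAPD validation metric used above can be sketched as follows (the thermal-conductivity data points are hypothetical, for illustration only):

```python
def aapd(measured, predicted):
    """Absolute average percent deviation (AAPD) between measured and
    predicted values: mean of |(pred - meas) / meas| in percent."""
    terms = [abs((p - m) / m) * 100.0 for m, p in zip(measured, predicted)]
    return sum(terms) / len(terms)

# Hypothetical liquid thermal conductivities, W/(m K):
measured = [0.140, 0.135, 0.128]
predicted = [0.138, 0.137, 0.122]
print(round(aapd(measured, predicted), 2))  # → 2.53
```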
Procedia PDF Downloads 654
1518 Lock in, Lock Out: A Double Lens Analysis of Local Media Paywall Strategies and User Response
Authors: Mona Solvoll, Ragnhild Kr. Olsen
Abstract:
Background and significance of the study: Newspapers are going through radical changes, with increased competition, eroding readerships and declining advertising resulting in plummeting overall revenues. This has led to a quest for new business models focused on monetizing content. This research paper investigates both how local online newspapers have introduced user payment and how the audience has received these changes. Given the role of local media in keeping their communities informed and those in power accountable, and their potential impact on civic engagement and cultural integration in local communities, the business model innovations of local media deserve far more research interest. Empirically, the findings are interesting for local journalists, local media managers and local advertisers. Basic methodology: The study is based on interviews with commercial leaders in 20 Norwegian local newspapers, in addition to national survey data from 1600 respondents among local media users. The interviews were conducted in the second half of 2015, while the survey was conducted in September 2016. Theoretically, the study draws on the business model framework. Findings: The analysis indicates that paywalls aim more at reducing digital cannibalisation of print revenue than at creating new digital income. The newspapers are mostly concerned with retaining "old" print subscribers and transforming them into digital subscribers. However, this strategy may come at a high price for newspapers if their defensive print strategy drives away younger digital readers and hampers their potential for recruiting new audiences, as some previous studies have indicated. Analysis of young readers' news habits indicates that attracting the younger audience to traditional local news providers is particularly challenging, and that they are more prone to seek alternative news sources than the older audience is.
Conclusion: The paywall strategy applied by the local newspapers may be well suited to stabilising print subscription figures and facilitating more tailored and better services for already existing customers, but far less suited to attracting new ones. The paywall is a short-sighted strategy which drives away younger readers and paves the road for substitute offerings, particularly Facebook.
Keywords: business model, newspapers, paywall, user payment
Procedia PDF Downloads 277
1517 Encephalon-An Implementation of a Handwritten Mathematical Expression Solver
Authors: Shreeyam, Ranjan Kumar Sah, Shivangi
Abstract:
Recognizing and solving handwritten mathematical expressions can be a challenging task, particularly when segmenting and classifying individual characters. This project proposes a solution that uses a Convolutional Neural Network (CNN) and image processing techniques to accurately solve various types of equations, including arithmetic, quadratic, and trigonometric equations, as well as logical operations like AND, OR, NOT, NAND, XOR, and NOR. The proposed solution also provides a graphical solution, allowing users to visualize equations and their solutions. In addition to equation solving, the platform, called CNNCalc, offers a comprehensive learning experience for students: educational content, a quiz platform, and a coding platform for practicing programming skills in languages like C, Python, and Java, with progress tracking. The proposed methodology includes horizontal compact projection analysis for segmentation and binarization, as well as connected component analysis and integrated connected component analysis for character classification. The compact projection algorithm compresses the horizontal projections to remove noise and obtain a clearer image, contributing to the accuracy of character segmentation. Experimental results demonstrate the accuracy and effectiveness of the proposed solution in solving a wide range of mathematical equations. With its user-friendly interface, graphical representation of the equations being solved, and comprehensive features, CNNCalc provides a powerful, interactive and engaging platform for solving equations, learning, and practicing programming skills, and is poised to revolutionize the way students learn and solve mathematical equations.
Keywords: AI, ML, handwritten equation solver, maths, computer, CNNCalc, convolutional neural networks
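The horizontal-projection idea behind the segmentation step can be sketched as follows (a simplified version: the paper's compact projection additionally compresses the projections to suppress noise; the toy binary image is illustrative):

```python
import numpy as np

def horizontal_projection_rows(img, thresh=0):
    """Segment text lines by horizontal projection: count ink pixels per
    row and keep runs of rows whose projection exceeds `thresh`."""
    proj = (img > 0).sum(axis=1)          # ink count per row
    rows, start = [], None
    for i, v in enumerate(proj):
        if v > thresh and start is None:
            start = i                     # a text run begins
        elif v <= thresh and start is not None:
            rows.append((start, i))       # a text run ends
            start = None
    if start is not None:
        rows.append((start, len(proj)))
    return rows

# Tiny binary image with two "text lines" (rows 1-2 and row 4):
img = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 1, 0, 1],
])
print(horizontal_projection_rows(img))  # → [(1, 3), (4, 5)]
```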
Procedia PDF Downloads 122
1516 Nonlinear Model Predictive Control of Water Quality in Drinking Water Distribution Systems with DBPs Objectives
Authors: Mingyu Xie, Mietek Brdys
Abstract:
The paper develops a non-linear model predictive control (NMPC) scheme for water quality in drinking water distribution systems (DWDS), based on an advanced non-linear quality dynamics model including disinfection by-products (DBPs). Special attention is paid to the analysis of the impact of the flow trajectories prescribed by the upper control level of the recently developed two-time-scale architecture for integrated quality and quantity control in DWDS. The new quality controller is to operate within this architecture, in the fast time scale, as the lower-level quality controller. The controller performance is validated by a comprehensive simulation study based on an example case-study DWDS.
Keywords: model predictive control, hierarchical control structure, genetic algorithm, water quality with DBPs objectives
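The receding-horizon principle behind NMPC can be sketched with a toy chlorine-residual model (the model, cost, and exhaustive search over dosing levels below are illustrative stand-ins; the paper's controller uses its advanced DWDS quality dynamics and, per the keywords, a genetic algorithm optimizer):

```python
from itertools import product

def nmpc_step(x, horizon, candidates, step_model, cost):
    """One receding-horizon NMPC step: simulate every candidate control
    sequence over the horizon through the nonlinear model, then apply
    only the first move of the cheapest sequence."""
    best_u, best_j = candidates[0], float("inf")
    for seq in product(candidates, repeat=horizon):
        xk, j = x, 0.0
        for u in seq:
            xk = step_model(xk, u)
            j += cost(xk, u)
        if j < best_j:
            best_j, best_u = j, seq[0]
    return best_u

# Toy chlorine residual: decays by 10% per step, dosing u adds chlorine.
step_model = lambda x, u: 0.9 * x + u
cost = lambda x, u: (x - 1.0) ** 2 + 0.01 * u ** 2   # track 1.0 mg/L

x = 0.2
for _ in range(10):
    u = nmpc_step(x, horizon=3, candidates=[0.0, 0.1, 0.2, 0.3],
                  step_model=step_model, cost=cost)
    x = step_model(x, u)
print(round(x, 2))  # residual held near the 1.0 mg/L target
```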
Procedia PDF Downloads 317
1515 Control of a Stewart Platform for Minimizing Impact Energy in Simulating Spacecraft Docking Operations
Authors: Leonardo Herrera, Shield B. Lin, Stephen J. Montgomery-Smith, Ziraguen O. Williams
Abstract:
Three control algorithms, Proportional-Integral-Derivative, Linear-Quadratic-Gaussian, and Linear-Quadratic-Gaussian with shift, were applied in a computer simulation of a one-directional dynamic model of a Stewart platform. The goal was to compare the dynamic system responses under the three control algorithms and to minimize the impact energy when simulating spacecraft docking operations. Equations were derived for the control algorithms and for the input and output of the feedback control system. Using MATLAB, Simulink diagrams were created to represent the three control schemes, with a switch selector for conveniently changing among the controllers. The simulation demonstrated that the controller using the Linear-Quadratic-Gaussian-with-shift algorithm resulted in the lowest impact energy.
Keywords: controller, Stewart platform, docking operation, spacecraft
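Of the three algorithms, the PID law is the simplest to sketch; below, a discrete implementation drives a toy one-directional, unit-mass platform model (the gains and plant are illustrative assumptions, not the paper's Simulink model):

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID controller; returns a stateful step function u = f(e)."""
    state = {"i": 0.0, "e_prev": 0.0}
    def step(e):
        state["i"] += e * dt                  # integral of error
        d = (e - state["e_prev"]) / dt        # derivative of error
        state["e_prev"] = e
        return kp * e + ki * state["i"] + kd * d
    return step

# Drive a unit-mass, one-directional platform toward position 1.0:
dt, pos, vel = 0.01, 0.0, 0.0
pid = make_pid(kp=50.0, ki=10.0, kd=20.0, dt=dt)
for _ in range(3000):
    f = pid(1.0 - pos)     # control force from position error
    vel += f * dt          # a = F / m with m = 1
    pos += vel * dt
print(round(pos, 3))  # settles at the 1.0 setpoint
```

The LQG variants replace this fixed-gain law with gains computed from the plant and noise models, which is what lets the shifted LQG design minimize the impact energy in the comparison.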
Procedia PDF Downloads 51
1514 Promotional Mix as a Determinant of Consumer Buying Decision in the Food and Beverages Industry: A Case Study of Nigeria Bottling Company Plc., Asejire Ibadan
Authors: Adedeji S. Adegoke, Olakunle N. Popoola
Abstract:
Promotion is an indispensable and invaluable property of marketing through which different organizations persuade their prospective customers. Passing information about a product to consumers in the outside world is known as promotional activity. The study determined whether there was a relationship between the promotional mix and consumer buying decisions, that is, whether customers were influenced by promotion. It also investigated whether promotion can be used to influence competitors' activities in the market, and whether any problems were encountered by the Nigeria Bottling Company Plc in promoting its beverage products. The various forms of promotional mix available to an organization were examined, and the appropriate promotional mix that the company can adopt to boost sales was recommended. The research design depended on primary and secondary data. The primary data were information collected from the subjects using data collection methods such as questionnaires, interviews and direct observation. The secondary data consist of information that already exists, having been collected for another purpose by other researchers; these include internal and external sources. A questionnaire was designed and administered to the staff of the production and marketing departments of the Nigeria Bottling Company Plc, which served as the population of this study, from which a sample was drawn using the simple random technique. It was deduced that 90% of the respondents opined that advertising influenced competition in the market and that sales improved after they started advertising, while 10% of them were not sure.
At the advertising level, 85% of the respondents chose 81-100% as the increase recorded in their sales level, while 10% of them agreed that the increase recorded in their sales was within 61-80%, and 5% of them chose 45-60% as the percentage increase in their sales record. Due to the unstable economic condition of Nigeria, many business organizations adopted promotional strategies. Apart from advertising, it was discovered through the research that sales promotion served as an incentive to consumers: the Nigeria Bottling Company Plc at times offered gifts and prizes to consumers, which drastically increased its level of sales. Since advertising and sales promotion increased the level of sales, more money should be allocated for this purpose to maintain market share and thereby increase profit.
Keywords: consumer, marketing, organization, promotional mix
Procedia PDF Downloads 162
1513 Narratives in Science as Covert Prestige Indicators
Authors: Zinaida Shelkovnikova
Abstract:
The language of science is changing to meet the demands of society. We shall argue that in the varied modern world there are important reasons for the integration of narratives into scientific discourse. Nowadays, scientists are faced with extremely rapid scientific development and progress, and modern scientific society lives in conditions of tough competition. The integration of narratives into scientific discourse is thus a good way to communicate scientific experience to different audiences and to express the covert prestige of the discourse. Narratives also form the identity of the persuasive narrator. Using the narrative approach to scientific discourse analysis, we reveal the sociocultural diversity of scientists. If you want to attract an audience's attention to your scientific research, narratives should be integrated into your scientific discourse; those who understand this consistent pattern are considered leading scientists. Taking into account that it is prestigious to be renowned and celebrated in science, it is a matter of covert prestige to write narratives in science. We define a science narrative as an intentional, consequent, coherent, event-based discourse, or discourse fragment, which contains the author's creativity, in some cases intrigue, and gives mostly qualitative information (compared with quantitative data) in order to provide maximum understanding of the research. Science narratives also allow effective argumentation and consequently construct the identity of the persuasive narrator. However, the skill of creating appropriate scientific discourse reflects the level of prestige. In order to teach postgraduate students to be successful in English scientific writing and to be prestigious in the scientific society, we have defined the science narrative and outlined its main features and characteristics. Narratives contribute to the audience's involvement with the narrator and his or her narration.
In general, the way in which a narrative is performed may result in limited or greater contact with the audience. To this end, authors use emotional fictional elements; descriptive elements such as adjectives, adverbs and comparisons; and the author's evaluative elements. Thus, the features of science narrativity are the following: descriptive tools; the author's evaluation; qualitative information exceeding quantitative data; facts taking on event status; understandability; accessibility; creativity; logic; intrigue; esthetic nature; fiction. To conclude, narratives signal the covert prestige of scientific discourse and shape the identity of the persuasive scientist.
Keywords: covert prestige, narrativity, scientific discourse, scientific narrative
Procedia PDF Downloads 399
1512 A New Design Methodology for Partially Reconfigurable Systems-on-Chip
Authors: Roukaya Dalbouchi, Abdelkrin Zitouni
Abstract:
In this paper, we propose a novel design methodology for Dynamic Partial Reconfigurable (DPR) systems. This type of system can be modified after its design and during its execution. The suggested design methodology is generic in terms of granularity, number of modules, and reconfigurable region, and is suitable for any type of modern application. It is based on the interconnection of several design stages. The recommended methodology represents a guide for the design of DPR architectures that meet the reconfiguration/performance compromise. To validate the proposed methodology, we use video watermarking as an application. The comparison results show that the proposed methodology supports all stages of DPR architecture design and is characterized by a high abstraction level. It provides a dynamically/partially reconfigurable architecture and guarantees hardware efficiency, flexibility of reconfiguration, and superior performance in terms of frequency and power consumption.
Keywords: dynamically reconfigurable system, block matching algorithm, partial reconfiguration, motion vectors, video watermarking
Procedia PDF Downloads 95
1511 Binarization and Recognition of Characters from Historical Degraded Documents
Authors: Bency Jacob, S.B. Waykar
Abstract:
Degradations in historical document images appear due to the aging of the documents. It is very difficult to understand and retrieve text from badly degraded documents, as there is variation between the document foreground and background. Thresholding of such document images results either in broken characters or in the detection of false text. Numerous algorithms exist that can separate text and background efficiently in the textual regions of a document, but portions of the background are mistaken for text in areas that hardly contain any text. This paper presents a way to overcome these problems with a robust binarization technique that recovers the text from severely degraded document images and thereby increases the accuracy of optical character recognition systems. The proposed document recovery algorithm efficiently removes degradations from document images. Here we use Otsu's method, local thresholding and global thresholding, and after binarization we train on and recognize the characters in the degraded documents.
Keywords: binarization, denoising, global thresholding, local thresholding, thresholding
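Otsu's global threshold, the first of the binarization ingredients mentioned, can be computed directly from the image histogram; a minimal sketch (the tiny test image is illustrative):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the intensity level that maximizes the
    between-class variance of foreground vs. background pixels."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, 0.0
    w_b = sum_b = 0.0
    for t in range(256):
        w_b += hist[t]                       # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b                    # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                    # background mean
        m_f = (sum_all - sum_b) / w_f        # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two well-separated intensity populations: threshold falls between them.
gray = np.array([[20, 22, 25, 200], [21, 24, 210, 205]], dtype=np.uint8)
t = otsu_threshold(gray)
print(t)  # → 25
binary = gray > t
```

On degraded documents a single global threshold like this breaks down, which is why the paper combines it with local thresholding.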
Procedia PDF Downloads 344
1510 Collaborative and Experimental Cultures in Virtual Reality Journalism: From the Perspective of Content Creators
Authors: Radwa Mabrook
Abstract:
Virtual Reality (VR) content creation is a complex and expensive process which requires multi-disciplinary teams of content creators. Grant schemes from technology companies help media organisations to explore the potential of VR in journalism and factual storytelling. Media organisations try to do as much as they can in-house, but they may outsource due to time constraints and skill availability. Journalists, game developers, sound designers and creative artists work together and bring in new cultures of work. This study explores the collaborative, experimental nature of VR content creation by tracing every actor involved in the process and examining their perceptions of VR work. The study builds on Actor Network Theory (ANT), which decomposes phenomena into their basic elements and traces the interrelations among them. The researcher therefore conducted 22 semi-structured interviews with VR content creators between November 2017 and April 2018. Purposive and snowball sampling techniques allowed the researcher to recruit fact-based VR content creators from production studios and media organisations, as well as freelancers. Interviews lasted up to three hours, and they were a mix of Skype calls and in-person interviews. Participants consented to their interviews being recorded and to their names being revealed in the study. The researcher coded the interview transcripts in Nvivo software, looking for key themes that correspond with the research questions. The study revealed that VR content creators must be adaptive to change, open to learning and comfortable with mistakes. The VR content creation process is very iterative because VR has no established workflow or visual grammar. Multi-disciplinary VR team members often speak different languages, making it hard to communicate. However, adaptive content creators perceive VR work as a fun experience and an opportunity to learn.
The traditional sense of competition and the striving for information exclusivity are now replaced by a strong drive for knowledge sharing. VR content creators are open to sharing their methods of work and their experiences. They aim to build a collaborative network that harnesses VR technology for journalism and factual storytelling. Indeed, VR is instilling collaborative and experimental cultures in journalism.
Keywords: collaborative culture, content creation, experimental culture, virtual reality
Procedia PDF Downloads 127
1509 Comparative Methods for Speech Enhancement and the Effects on Text-Independent Speaker Identification Performance
Authors: R. Ajgou, S. Sbaa, S. Ghendir, A. Chemsa, A. Taleb-Ahmed
Abstract:
Speech enhancement algorithms aim to improve speech quality. In this paper, we review several speech enhancement methods and evaluate their performance based on Perceptual Evaluation of Speech Quality scores (PESQ, ITU-T P.862). All methods were evaluated in the presence of different kinds of noise using the TIMIT database and the NOIZEUS noisy speech corpus. The noise was taken from the AURORA database and includes suburban train, babble, car, exhibition hall, restaurant, street, airport and train station noise. Simulation results showed improved speech enhancement performance for the tracking-of-non-stationary-noise approach in comparison with the other methods in terms of the PESQ measure. Moreover, we evaluated the effects of the speech enhancement techniques on a speaker identification system based on an autoregressive (AR) model and Mel-frequency cepstral coefficients (MFCC).
Keywords: speech enhancement, PESQ, speaker recognition, MFCC
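Since the abstract compares enhancement methods, a minimal sketch of one classic baseline, magnitude-domain spectral subtraction, may help fix ideas. All values below are hypothetical; this illustrates the general technique, not the authors' implementation or the PESQ scoring itself.

```python
def spectral_subtraction(noisy_mag, noise_mag, beta=0.02):
    """Classic spectral subtraction: subtract an estimated noise magnitude
    spectrum from each frame, flooring at beta * noisy magnitude to limit
    musical-noise artefacts."""
    enhanced = []
    for frame in noisy_mag:
        enhanced.append([max(y - n, beta * y) for y, n in zip(frame, noise_mag)])
    return enhanced

# hypothetical per-frame magnitude spectra (3 bins); the noise spectrum is
# estimated from the first two frames, assumed to contain no speech
frames = [[1.0, 0.5, 0.2], [1.1, 0.4, 0.3], [5.0, 2.0, 0.25]]
noise_est = [sum(f[k] for f in frames[:2]) / 2 for k in range(3)]
clean = spectral_subtraction(frames, noise_est)
```

In a full pipeline the cleaned magnitudes would be recombined with the noisy phase and inverted back to the time domain before scoring with PESQ.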
Procedia PDF Downloads 424
1508 A Deep Learning Based Integrated Model for Spatial Flood Prediction
Authors: Vinayaka Gude Divya Sampath
Abstract:
The research introduces an integrated prediction model to assess the susceptibility of roads in a future flooding event. The model consists of a deep learning algorithm for forecasting gauge-height data and the Flood Inundation Mapper (FIM) for spatial flooding. An optimal architecture for a Long Short-Term Memory (LSTM) network was identified for the gauge located on the Tangipahoa River at Robert, LA. Dropout was applied to the model to evaluate the uncertainty associated with the predictions. The estimates are then used along with FIM to identify spatial flooding. Further geoprocessing in ArcGIS provides the susceptibility values for different roads. The model was validated against the devastating flood of August 2016. The paper discusses the challenges of generalizing the methodology to other locations and to various types of flooding. The developed model can be used by transportation departments and other emergency response organizations for effective disaster management.
Keywords: deep learning, disaster management, flood prediction, urban flooding
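As a hedged sketch of the forecasting step, the sliding-window pairs an LSTM would be trained on, and the thresholding of forecasts against a flood stage, can be illustrated as follows. The gauge heights, look-back length and flood-stage value are invented for illustration, not taken from the Tangipahoa River data.

```python
def make_windows(series, lookback):
    """Build (input window, next value) pairs for training a sequence model
    such as an LSTM on gauge-height history."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

gauge = [2.1, 2.3, 2.8, 3.5, 4.9, 6.2]   # hypothetical gauge heights (ft)
X, y = make_windows(gauge, lookback=3)

# forecasts above a flood-stage threshold would trigger spatial mapping with FIM
flood_stage = 5.0
alerts = [value > flood_stage for value in y]
```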
Procedia PDF Downloads 146
1507 Energy Efficient Routing Protocol with Ad Hoc On-Demand Distance Vector for MANET
Authors: K. Thamizhmaran, Akshaya Devi Arivazhagan, M. Anitha
Abstract:
A key systematic issue that must be solved when implementing a data transmission algorithm in Mobile Ad Hoc Networks (MANETs) is how to save mobile nodes' energy while meeting the requirements of applications or users, since mobile nodes are battery-limited. While satisfying the energy-saving requirement, it is also necessary to achieve quality of service; in emergency work, data must be delivered on time. Achieving quality of service in MANETs is therefore equally important. In order to meet these requirements, we implement the proposed Energy-Aware routing protocol for MANETs, which saves energy at every node by efficiently selecting an energy-efficient path in the routing process by means of an Enhanced AODV (EAODV) routing protocol.
Keywords: Ad-Hoc networks, MANET, routing, AODV, EAODV
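One common energy-aware route-selection heuristic, choosing the candidate route whose weakest node has the most residual battery, can be sketched as below. This is an assumed illustration of the general idea, not the proposed EAODV protocol itself; node names and energy levels are hypothetical.

```python
def select_route(routes, energy):
    """Max-min battery heuristic: pick the route whose weakest node has the
    most residual energy; ties are broken in favour of fewer hops."""
    return max(routes, key=lambda r: (min(energy[n] for n in r), -len(r)))

# hypothetical residual battery levels (fraction of full charge)
battery = {"S": 1.0, "A": 0.9, "B": 0.2, "C": 0.7, "D": 0.8, "T": 1.0}

# two candidate routes discovered between source S and target T
candidates = [["S", "A", "B", "T"], ["S", "C", "D", "T"]]
best = select_route(candidates, battery)
```

The first route is rejected because node B (20% charge) would drain quickly, even though both routes have the same hop count.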
Procedia PDF Downloads 370
1506 Bipolar Impulse Noise Removal and Edge Preservation in Color Images and Video Using Improved Kuwahara Filter
Authors: Reji Thankachan, Varsha PS
Abstract:
Both image capturing devices and the human visual system are nonlinear. Hence, nonlinear filtering methods outperform their linear counterparts in many applications. Linear methods are unable to remove impulsive noise from images while preserving their edges and fine details. In addition, linear algorithms are unable to remove signal-dependent or multiplicative noise from images. This paper presents an approach to denoise and smoothen images and videos corrupted by bipolar impulse noise using an improved Kuwahara filter. It involves a two-stage algorithm comprising noise detection followed by filtering. Numerous simulations demonstrate that the proposed method outperforms the existing method by eliminating the painting-like flattening effect along the local feature direction while preserving edges, with improvements in PSNR and MSE.
Keywords: bipolar impulse noise, Kuwahara, PSNR, MSE, PDF
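A grayscale sketch of the basic Kuwahara filtering stage, not the paper's improved two-stage variant, might look as follows: each pixel takes the mean of whichever quadrant window around it has the lowest variance, which smooths noise while leaving step edges intact.

```python
def kuwahara(img, r=1):
    """Basic Kuwahara filter (grayscale sketch): for each interior pixel,
    assign the mean of the quadrant window with the smallest variance."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            quads = [
                [(y + dy, x + dx) for dy in range(-r, 1) for dx in range(-r, 1)],
                [(y + dy, x + dx) for dy in range(-r, 1) for dx in range(0, r + 1)],
                [(y + dy, x + dx) for dy in range(0, r + 1) for dx in range(-r, 1)],
                [(y + dy, x + dx) for dy in range(0, r + 1) for dx in range(0, r + 1)],
            ]
            best_mean, best_var = None, None
            for q in quads:
                vals = [img[j][i] for j, i in q]
                m = sum(vals) / len(vals)
                v = sum((p - m) ** 2 for p in vals) / len(vals)
                if best_var is None or v < best_var:
                    best_mean, best_var = m, v
            out[y][x] = best_mean
    return out

# a vertical step edge: dark on the left, bright on the right
edge = [[0, 0, 0, 100, 100] for _ in range(5)]
out = kuwahara(edge)
```

Unlike a mean or Gaussian filter, the step edge survives unblurred: pixels on each side keep their side's value because one quadrant always lies entirely within a uniform region.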
Procedia PDF Downloads 498
1505 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress put on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected close to the loads. Yet, low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution, and the regulatory competitive advantage of fossil-fuel-based technologies. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study is restricted to decentralised operation, excluding aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as PS is the decentralised service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter, arbitraged profiles at higher voltage levels. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge-recovery algorithm that ensures enough energy is present in the battery in case it is needed, without generating new peaks while charging the unit. Three categories of PS algorithms are introduced in detail.
The first uses a constant threshold or power rate for charge recovery; the second uses the State of Charge (SOC) as a decision variable; the third uses a load forecast, the impact of whose accuracy is discussed, to generate PS. A set of performance metrics was defined in order to quantitatively evaluate the algorithms' operation with regard to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that a constant charging threshold or power rate is far from optimal: a fixed value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
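A minimal sketch of an SOC-aware peak-shaving loop, assuming a hypothetical demand profile and battery parameters rather than the paper's validated model, could look like this: discharge to clip demand above a threshold, and recharge only while demand sits below a recovery level, so that charging itself cannot create a new peak.

```python
def peak_shave(load, capacity, threshold, recover_level, step=1.0):
    """SOC-based peak shaving: clip grid demand above `threshold` using the
    battery, and recharge only when demand is below `recover_level`."""
    soc = capacity          # start fully charged (kWh)
    grid = []
    for p in load:
        if p > threshold and soc > 0:
            shave = min(p - threshold, soc / step)   # limited by stored energy
            soc -= shave * step
            p -= shave
        elif p < recover_level and soc < capacity:
            charge = min(recover_level - p, (capacity - soc) / step)
            soc += charge * step
            p += charge     # charging raises grid draw, but never above recover_level
        grid.append(p)
    return grid

demand = [1.0, 2.0, 6.0, 7.0, 2.0, 1.0]   # hypothetical kW over 1-hour steps
grid = peak_shave(demand, capacity=4.0, threshold=5.0, recover_level=3.0)
```

The 7 kW peak is clipped to 5 kW, and recovery charging in the quiet hours never pushes the grid draw above 3 kW.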
Procedia PDF Downloads 117
1504 Coarse-Graining in Micromagnetic Simulations of Magnetic Hyperthermia
Authors: Razyeh Behbahani, Martin L. Plumer, Ivan Saika-Voivod
Abstract:
Micromagnetic simulations based on the stochastic Landau-Lifshitz-Gilbert equation are used to calculate dynamic magnetic hysteresis loops relevant to magnetic hyperthermia applications. With the goal of effectively simulating room-temperature loops for large iron-oxide-based systems at relatively slow sweep rates, on the order of 1 Oe/ns or less, a coarse-graining scheme is proposed and tested. The scheme is derived from a previously developed renormalization-group approach. Loops associated with nanorods, used as building blocks for larger nanoparticles that were employed in preclinical trials (Dennis et al., 2009 Nanotechnology 20 395103), serve as the model test system. The scaling algorithm is shown to produce nearly identical loops over several decades in the model grain sizes. Sweep-rate scaling involving the damping constant alpha is also demonstrated.
Keywords: coarse-graining, hyperthermia, hysteresis loops, micromagnetic simulations
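As a much-simplified illustration of the underlying dynamics (one deterministic macrospin in reduced units, with no stochastic thermal field or grain coupling), the damped Landau-Lifshitz-Gilbert relaxation can be sketched as follows; the parameter values are arbitrary.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def llg_relax(m, h, alpha=0.5, dt=0.01, steps=5000):
    """Deterministic LLG relaxation of a single macrospin in reduced units
    (gamma = 1): dm/dt = -m x h - alpha * m x (m x h).
    Simple Euler steps with renormalization to keep |m| = 1."""
    for _ in range(steps):
        p = cross(m, h)      # precession torque m x h
        d = cross(m, p)      # damping torque m x (m x h)
        m = tuple(m[i] - dt * (p[i] + alpha * d[i]) for i in range(3))
        norm = sum(c * c for c in m) ** 0.5
        m = tuple(c / norm for c in m)
    return m

# spin starts along x, field along z: damping pulls m onto the field axis
m = llg_relax((1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

A hysteresis-loop simulation would sweep the field h while integrating many coupled, thermally agitated grains; the damping constant alpha here plays the same role as in the sweep-rate scaling mentioned above.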
Procedia PDF Downloads 148
1503 Computational Team Dynamics and Interaction Patterns in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are among the key factors of successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm ideas, and later converge to work together. These two traits require the teams to exercise divergent and convergent thinking simultaneously, and there needs to be a good balance. The team dynamics invariably result in conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams to be creative as a group, relational conflicts (or discords among members) can be detrimental to teamwork. Team communication truly reflects these tensions and team dynamics. In this research, team communication (emails) between the members of the NPD teams is considered for analysis. The email communication is processed through a Latent Semantic Analysis (LSA) algorithm to analyze the content of communication, and a semantic similarity analysis, to arrive at a social network graph that depicts the communication amongst team members based on the content of communication. The amount of communication (content, not frequency of communication) defines the interaction strength between the members. A social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the Adjacency Matrix (AM) and the Dichotomized Adjacency Matrix (DAM), based on network density, yield network graphs and network metrics such as centrality.
The social network graphs are then rendered for visual representation using a Metric Multi-Dimensional Scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between the nodes in the placement represents the tie strength between the members: stronger tie strengths render nodes closer. The overall visual representation of the social network graph provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have a clearly defined computational scheme. The four computational patterns of team interaction defined are the Central Member Pattern (CMP), Subgroup and Aloof member Pattern (SAP), Isolate Member Pattern (IMP), and Pendant Member Pattern (PMP). Each of these patterns has a team dynamics implication in terms of the conflict level in the team. For instance, the Isolate Member Pattern clearly points to a near breakdown in communication with the member, and hence a possible high conflict level, whereas the Subgroup and Aloof member Pattern points to a non-uniform information flow in the team and a moderate level of conflict. These pattern classifications of teams are then compared and correlated with the real level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection and feedback form, and the results show a good correlation.
Keywords: team dynamics, team communication, team interactions, social network analysis, SNA, new product development, latent semantic analysis, LSA, NPD teams
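The AM-to-DAM step and the detection of an isolate can be sketched with hypothetical interaction strengths; this is a simplification of the density-based dichotomization described above, with the cut-off chosen by hand rather than from network density.

```python
def dichotomize(am, cut):
    """Dichotomized Adjacency Matrix (DAM): keep a tie only where the
    content-based interaction strength reaches the chosen cut-off."""
    n = len(am)
    return [[1 if i != j and am[i][j] >= cut else 0 for j in range(n)]
            for i in range(n)]

def degree_centrality(dam):
    """Raw degree: number of surviving ties per member."""
    return [sum(row) for row in dam]

# hypothetical content-based interaction strengths among four team members
am = [[0, 5, 4, 0],
      [5, 0, 6, 1],
      [4, 6, 0, 0],
      [0, 1, 0, 0]]
dam = dichotomize(am, cut=3)
deg = degree_centrality(dam)

# a member with no surviving ties matches the Isolate Member Pattern (IMP)
isolates = [i for i, d in enumerate(deg) if d == 0]
```

Member 3's only weak tie (strength 1) falls below the cut-off, so the member shows up as an isolate, the pattern associated with a near breakdown in communication.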
Procedia PDF Downloads 70
1502 Ankle Fracture Management: A Unique Cross Departmental Quality Improvement Project
Authors: Langhit Kurar, Loren Charles
Abstract:
Introduction: In light of recently published BOAST 12 (August 2016) guidance on the management of ankle fractures, the project aimed to highlight key discrepancies throughout the care trajectory, from admission to the point of discharge, at a district general hospital. A wide breadth of data covering three key domains (accident and emergency, radiology, and orthopaedic surgery) was subsequently stratified, and recommendations on note documentation and outpatient follow-up were made. Methods: A retrospective twelve-month audit was conducted reviewing the results of ankle fracture management in 37 patients. The inclusion criterion was all patients seen at Darent Valley Hospital (DVH) emergency department with radiographic evidence of an ankle fracture. The exclusion criteria were all patients managed solely by nursing staff or who had sustained a purely ligamentous injury. Medical notes, including discharge summaries, and the PACS online radiographic tool were used for data extraction. Results: Cross-examination of the A&E domain revealed limited awareness of the recent BOAST 12 publication, including the requirement to document skin integrity and neurovascular assessment. This had direct implications, as it would have changed the surgical plan for acutely compromised patients. The majority of results obtained from the radiographic domain were satisfactory, with appropriate X-rays taken in over 95% of cases. However, due to time pressures within A&E, patients were often left in a backslab without a post-manipulation X-ray. Poorly reduced fractures were subsequently left for a long period, resulting in swollen ankles and a time-dependent lag to surgical intervention. This had knock-on implications for prolonged inpatient stay, resulting in hospital-acquired co-morbidity, including pressure sores. Discussion: The audit has highlighted several areas of improvement throughout the disease trajectory, from review in the emergency department to follow-up as an outpatient.
This has prompted the creation of an algorithm to ensure patients with significant fractures presenting to the emergency department are seen promptly and treatment is expedited as per recent guidance, including the timing of X-rays taken in A&E. Re-audit has shown significant improvement in both documentation at the time of presentation and appropriate follow-up strategies. Within the orthopaedic domain, we are in the process of creating an ankle fracture pathway to ensure imaging and weight-bearing status are made clear to the consulting clinicians in an outpatient setting. Significance/Clinical Relevance: As a result of the ankle fracture algorithm, we have adapted the BOAST 12 guidance to shape an intrinsic pathway that not only improves patient management within the emergency department but also creates a standardised format for follow-up.
Keywords: ankle, fracture, BOAST, radiology
Procedia PDF Downloads 180
1501 Competitivity in Procurement Multi-Unit Discrete Clock Auctions: An Experimental Investigation
Authors: Despina Yiakoumi, Agathe Rouaix
Abstract:
Laboratory experiments were run to investigate the impact of different design characteristics of the auctions that have been implemented to procure capacity in the UK's reformed electricity markets. The experiment studies competition among bidders in procurement multi-unit discrete descending clock auctions under different feedback policies and pricing rules. Theory indicates that the feedback policy, in combination with the two common pricing rules, last-accepted bid (LAB) and first-rejected bid (FRB), could significantly affect the auction outcome. Two information feedback policies regarding the participants' bidding prices are considered: with feedback and without feedback. With feedback, after each round participants are informed of the number of items still in the auction; without feedback, after each round participants have no information about the aggregate supply. Under LAB, winning bidders receive the amount of the highest successful bid, and under FRB, winning bidders receive the lowest unsuccessful bid. Based on the theoretical predictions of the alternative auction designs, three treatments were run. The first treatment considers LAB with feedback; the second studies LAB without feedback; the third investigates FRB without feedback. Theoretical analysis of the game showed that under FRB, the auction outcome is indifferent to the feedback policy. Preliminary results indicate that LAB with feedback and FRB without feedback achieve, on average, higher clearing prices than the LAB treatment without feedback. However, the clearing prices under LAB with feedback and FRB without feedback are, on average, lower than the theoretical predictions. Although under LAB without feedback theory predicts the clearing price will drop to the competitive equilibrium, experimental results indicate that participants could still engage in cooperative behavior and drive up the price of the auction.
It is shown, both theoretically and experimentally, that the pricing rules and the feedback policy affect the bidding competitiveness of the auction by providing opportunities for participants to engage in cooperative behavior and exercise market power. LAB without feedback seems to be less vulnerable to market-power opportunities than the alternative auction designs. This could be an argument for using the LAB pricing rule in combination with limited feedback in the UK capacity market, in an attempt to improve affordability for consumers.
Keywords: descending clock auctions, experiments, feedback policy, market design, multi-unit auctions, pricing rules, procurement auctions
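The two pricing rules can be illustrated with a sealed-bid analogue of the final clock round, using invented supply offers; this sketch ignores the dynamic rounds and feedback that the experiment actually studies.

```python
def clearing_price(offers, units, rule):
    """Uniform clearing price in a procurement auction for `units` items:
    accept the cheapest supply offers; under LAB everyone is paid the last
    (highest) accepted offer, under FRB the first rejected one."""
    s = sorted(offers)
    if rule == "LAB":
        return s[units - 1]
    if rule == "FRB":
        return s[units]
    raise ValueError(rule)

offers = [12, 15, 20, 25, 30]   # hypothetical supply offers, one unit each
lab = clearing_price(offers, 3, "LAB")
frb = clearing_price(offers, 3, "FRB")
```

FRB always pays at least as much as LAB for the same offers, which is one reason the pricing rule interacts with bidders' incentives to shade or coordinate.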
Procedia PDF Downloads 298
1500 Developing Alternatives: Citizens Perspectives on Causes and Ramification of Political Conflict in Ivory Coast from 2002 - 2009
Authors: Suaka Yaro
Abstract:
This article provides an alternative examination of the causes and ramifications of the Ivorian political conflict from 2002 to 2009. The researcher employed a constructivist epistemology and a qualitative methodology, based upon fieldwork in different African cities interviewing Ivorians within and outside Ivory Coast. A purposive sample of fourteen respondents was selected, based on their involvement in the Ivorian conflict; their experiences of the causes and effects of the conflict were tapped for analysis. The data collection instruments were semi-structured interview questions, an open-ended semi-structured questionnaire, and documentary analysis. The perceptions of these participants on the causes, effects and possible solutions to the endemic conflict in their homeland hold key perspectives that have hitherto been ignored in the debate about the Ivorian political conflict and its legacies. Finally, from the synthesized findings of the investigation, the researcher concluded that the causes of the conflict were competition for scarce resources, bad governance, media incitement, xenophobia, incessant political power struggles and the proliferation of small firearms entering the country. The effects experienced during the conflict were human rights violations, destruction of property (including UN premises) and the displacement of people both internally and externally. The recommendations made include the following: the government should strive to strengthen relationships among different ethnic groups and help them adapt to the new challenges confronting democratic development in the country. The government should organise a South African-style Truth and Reconciliation Commission to revisit the horrors of the past, in order to heal wounds and prevent a future recurrence of the conflict.
Employment opportunities and other income-generating ventures for Ivorians should be created by the government by attracting local and foreign investors. The numerous rebels should be given special skills training in order for them to be able to live among the communities in Ivory Coast. A government of national unity should be encouraged in situations like this.
Keywords: displaced, federalism, pluralism, identity politics, grievance, eligibility, greed
Procedia PDF Downloads 224
1499 Adaptive Dehazing Using Fusion Strategy
Authors: M. Ramesh Kanthan, S. Naga Nandini Sujatha
Abstract:
The goal of haze removal algorithms is to enhance and recover details of a scene from a foggy image. The proposed method focuses on two main categories: (i) image enhancement based on adaptive contrast histogram equalization, and (ii) edge strengthening based on a gradient model. In many circumstances, accurate haze removal algorithms are needed. The de-fog feature works through a complex algorithm which first determines the fog density of the scene, then analyses the obscured image before applying contrast and sharpness adjustments to the video in real time. The fusion strategy is driven by the intrinsic properties of the original image and is highly dependent on the choice of the inputs and the weights. The output haze-free image is then reconstructed using the fusion methodology. In order to increase accuracy, an interpolation method is used in the output reconstruction. A promising retrieval performance is achieved, especially in particular examples.
Keywords: single image, fusion, dehazing, multi-scale fusion, per-pixel, weight map
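The per-pixel, weight-map-driven fusion step can be sketched as follows, with tiny hypothetical inputs standing in for the contrast-enhanced and edge-strengthened derived images; this illustrates the blending mechanics only, not the paper's multi-scale pipeline.

```python
def fuse(inputs, weights):
    """Per-pixel weighted fusion: normalise the weight maps so they sum to 1
    at each pixel, then blend the derived inputs accordingly."""
    h, w = len(inputs[0]), len(inputs[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = sum(wm[y][x] for wm in weights) or 1.0
            out[y][x] = sum(img[y][x] * wm[y][x] / total
                            for img, wm in zip(inputs, weights))
    return out

# two derived inputs and their per-pixel weight maps (hypothetical 2x2 values)
contrast = [[10, 20], [30, 40]]    # contrast-enhanced version
edges = [[50, 60], [70, 80]]       # edge-strengthened version
w_contrast = [[1, 1], [0, 3]]
w_edges = [[1, 3], [2, 1]]
dehazed = fuse([contrast, edges], [w_contrast, w_edges])
```

Where a weight map dominates (e.g. the bottom-left pixel), the fused output follows that input entirely, which is how the fusion adapts to the intrinsic properties of each region.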
Procedia PDF Downloads 464
1498 Recognition of Grocery Products in Images Captured by Cellular Phones
Authors: Farshideh Einsele, Hassan Foroosh
Abstract:
In this paper, we present a robust algorithm to recognize text extracted from grocery product images captured by mobile phone cameras. Recognition of such text is challenging since text in grocery product images varies in size, orientation, style and illumination, and can suffer from perspective distortion. Pre-processing is performed to make the characters scale- and rotation-invariant. Since text degradations cannot be appropriately defined using well-known geometric transformations such as translation, rotation, affine transformation and shearing, we use the character's entire set of black pixels as our feature vector. Classification is performed with a minimum distance classifier using the maximum likelihood criterion, which delivers a very promising Character Recognition Rate (CRR) of 89%. We achieve a considerably higher Word Recognition Rate (WRR) of 99% when using lower-level linguistic knowledge about product words during the recognition process.
Keywords: camera-based OCR, feature extraction, document image processing, grocery products
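A toy sketch of minimum-distance classification over whole-character black-pixel feature vectors is shown below, with invented 3x3 prototypes rather than trained ones; the real system works on much larger, pre-processed glyphs.

```python
def classify(glyph, prototypes):
    """Minimum-distance classifier: compare a character's black-pixel vector
    against each class prototype and return the nearest label."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(glyph, prototypes[label]))

# hypothetical 3x3 binary glyphs flattened to 9-dimensional vectors
protos = {
    "I": [0, 1, 0, 0, 1, 0, 0, 1, 0],
    "L": [1, 0, 0, 1, 0, 0, 1, 1, 1],
}
noisy_i = [0, 1, 0, 1, 1, 0, 0, 1, 0]   # an "I" with one flipped pixel
label = classify(noisy_i, protos)
```

Because the whole pixel pattern is the feature vector, a single flipped pixel barely moves the glyph away from its prototype, so the degraded character is still recognized.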
Procedia PDF Downloads 406