Search results for: Fundamental natural frequency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 533

143 Interactions between Cells and Nanoscale Surfaces of Oxidized Silicon Substrates

Authors: Chung-Yao Yang, Lin-Ya Huang, Tang-Long Shen, J. Andrew Yeh

Abstract:

The importance of manipulating an incorporated scaffold and directing cell behaviors is well appreciated in tissue engineering. Here, we developed novel nano-topographic oxidized silicon nanosponges amenable to various chemical modifications to provide insight into the fundamental biology of how cells interact with their surrounding environment in vitro. A wet etching technique allowed us to fabricate the silicon nanosponges in a high-throughput manner. Furthermore, various organo-silane chemicals were self-assembled on the surfaces by vapor deposition. We found that Chinese hamster ovary (CHO) cells displayed distinguishable morphogenesis, adherent responses, and biochemical properties when cultured on these chemically modified nano-topographic structures compared with the planar oxidized silicon counterparts, indicating that cell behaviors can be influenced by certain physical characteristics derived from nano-topography in addition to the hydrophobicity of contact surfaces crucial for cell adhesion and spreading. In particular, predominant nano-actin punches and slender protrusions formed when cells were cultured on the nano-topographic structures. This study points to potential applications of these nano-topographic biomaterials for controlling cell development in tissue engineering and basic cell biology research.

Keywords: Nanosponge, Cell adhesion, Cell morphology

PDF Downloads: 1511
142 Large Eddy Simulation of Compartment Fire with Gas Combustible

Authors: Mliki Bouchmel, Abbassi Mohamed Ammar, Kamel Geudri, Chrigui Mouldi, Omri Ahmed

Abstract:

The objective of this work is to use the Fire Dynamics Simulator (FDS) to investigate the behavior of a small-scale kerosene fire. FDS is a Computational Fluid Dynamics (CFD) tool developed specifically for fire applications. Throughout its development, FDS has been used for the resolution of practical problems in fire protection engineering and, at the same time, to study fundamental fire dynamics and combustion. Predictions are based on Large Eddy Simulation (LES) with a Smagorinsky turbulence model: LES directly computes the large-scale eddies, while the sub-grid scale dissipative processes are modeled. This technique is the default turbulence model and was used in this study. The numerical predictions are validated by direct comparison of combustion output variables to experimental measurements. The effect of mesh size on the temperature evolution is investigated and an optimum grid size is suggested. The effect of opening width is also investigated. Temperature distribution and species flow are presented for different operating conditions. The effect of the composition of the fuel used on atmospheric pollution is also a focus of this work. Good predictions are obtained when the size of the computational cells within the fire compartment is less than 1/10th of the characteristic fire diameter.
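
The mesh-size criterion quoted above (cells smaller than 1/10th of the characteristic fire diameter) can be made concrete with the expression commonly used in FDS practice, D* = (Q̇ / (ρ∞ cp T∞ √g))^(2/5). The sketch below evaluates it for a heat release rate chosen purely for illustration; it is not a value taken from the paper.

```python
# Minimal sketch: FDS-style characteristic fire diameter D* and the
# "D*/10" cell-size criterion mentioned in the abstract. The heat release
# rate below is an illustrative placeholder, not a value from the paper.

def characteristic_fire_diameter(q_dot_kw,
                                 rho_inf=1.204,   # kg/m^3, ambient air density
                                 cp=1.005,        # kJ/(kg.K), specific heat of air
                                 t_inf=293.0,     # K, ambient temperature
                                 g=9.81):         # m/s^2
    """D* = (Q / (rho * cp * T * sqrt(g)))**(2/5), with Q in kW."""
    return (q_dot_kw / (rho_inf * cp * t_inf * g ** 0.5)) ** 0.4

if __name__ == "__main__":
    q_dot = 500.0                       # kW, hypothetical small-scale kerosene fire
    d_star = characteristic_fire_diameter(q_dot)
    max_cell = d_star / 10.0            # criterion used in the abstract
    print(f"D* = {d_star:.3f} m, recommended cell size <= {max_cell * 100:.1f} cm")
```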

Keywords: Large eddy simulation, Radiation, Turbulence, combustion, pollution.

PDF Downloads: 2135
141 Increase of Organization in Complex Systems

Authors: Georgi Yordanov Georgiev, Michael Daly, Erin Gombos, Amrit Vinod, Gajinder Hoonjan

Abstract:

Measures of complexity and entropy have not converged to a single quantitative description of the levels of organization of complex systems. The need for such a measure is increasingly pressing in all disciplines studying complex systems. To address this problem, starting from the most fundamental principle in Physics, a new measure for the quantity of organization and rate of self-organization in complex systems, based on the principle of least (stationary) action, is applied here to a model system: the central processing unit (CPU) of computers. The quantity of organization for several generations of CPUs shows a double exponential rate of change of organization with time. The exact functional dependence has a fine, S-shaped structure, revealing some of the mechanisms of self-organization. The principle of least action helps to explain the mechanism of increase of organization through quantity accumulation and constraint and curvature minimization with an attractor, the least average sum of actions of all elements and for all motions. This approach can help describe, quantify, measure, manage, design and predict the future behavior of complex systems so that they achieve the highest rates of self-organization and improve their quality. It can be applied to other complex systems in Physics, Chemistry, Biology, Ecology, Economics, Cities, network theory and other fields where complex systems are present.

Keywords: Organization, self-organization, complex system, complexification, quantitative measure, principle of least action, principle of stationary action, attractor, progressive development, acceleration, stochastic.

PDF Downloads: 1598
140 Three-Dimensional Simulation of Free Electron Laser with Prebunching and Efficiency Enhancement

Authors: M. Chitsazi, B. Maraghechi, M. H. Rouhani

Abstract:

A three-dimensional simulation of harmonic up-generation in a free electron laser amplifier operating simultaneously with a cold and relativistic electron beam is presented in the steady-state regime, where the slippage of the electromagnetic wave with respect to the electron beam is ignored. By using the slowly varying envelope approximation and applying the source-dependent expansion to the wave equations, the electromagnetic fields are represented in terms of Hermite-Gaussian modes, which are well suited to the planar wiggler configuration. The electron dynamics is described by the fully three-dimensional Lorentz force equation in the presence of the realistic planar magnetostatic wiggler and the electromagnetic fields. A set of coupled nonlinear first-order differential equations is derived and solved numerically. The fundamental and third harmonic radiation of the beam is considered. In addition to a uniform beam, a prebunched electron beam has also been studied; for this, the effect of a sinusoidal distribution of entry times for the electron beam on the evolution of the radiation is compared with that of a uniform distribution. It is shown that prebunching reduces the saturation length substantially. For efficiency enhancement, the wiggler amplitude is set to decrease linearly when the radiation of the third harmonic saturates. The optimum starting point of tapering and the slope of the decrease in wiggler amplitude are found by successive runs of the code.

Keywords: Free electron laser, Prebunching, Undulator, Wiggler.

PDF Downloads: 1422
139 Design, Simulation, and Implementation of a Digital Pulse Oxygen Saturation Measurement System Using the Arduino Microcontroller

Authors: Muhibul Haque Bhuyan, Md. Refat Sarder

Abstract:

If a person can monitor his/her oxygen saturation level intermittently, he/she can identify a deteriorating condition early and seek a doctor's help. This paper reports the design, simulation, and implementation of a low-cost pulse oxygen saturation measurement device based on a reflective photoplethysmography (PPG) system, using an integrated circuit sensor as the fundamental component of this health status checking device. The measured physiological parameter is the blood oxygen saturation level (SpO2) in the peripheral capillaries. This work has been implemented using an Arduino Uno R3 microcontroller along with this sensor integrated circuit (IC). The system is designed in the Proteus environment and then simulated to check its performance. After that, the hardware implementation is performed. We used a clipping-type optical sensor to sense the arterial oxygen saturation level of the blood signal from the fingertips of an individual and then transformed it into digital data in the microcontroller through its programmed instructions. The designed system was tested by measuring the SpO2 level of several people of different ages, from 12 to 57 years. In addition, the same people were tested using a standard device purchased from the market. Test results were very satisfactory, as the average percentage error was only 1.59%.
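
The reported 1.59% figure corresponds to a simple average-percentage-error comparison against a reference oximeter. A minimal sketch of that calculation follows; the paired readings are hypothetical, not the paper's measurements.

```python
# Minimal sketch of the accuracy check described in the abstract: compare
# SpO2 readings from the prototype against a commercial reference oximeter
# and report the average percentage error. Readings are illustrative only.

def average_percentage_error(device, reference):
    """Mean absolute percentage error between paired SpO2 readings."""
    if len(device) != len(reference):
        raise ValueError("Paired readings required")
    errors = [abs(d - r) / r * 100.0 for d, r in zip(device, reference)]
    return sum(errors) / len(errors)

if __name__ == "__main__":
    prototype = [97, 98, 95, 99, 96, 94]   # hypothetical prototype readings (%)
    reference = [98, 98, 96, 98, 97, 95]   # hypothetical reference readings (%)
    print(f"Average percentage error: {average_percentage_error(prototype, reference):.2f}%")
```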

Keywords: Digital pulse oxygen saturation level, oximeter, measurement, design, simulation, implementation, Proteus, Arduino Uno microcontroller.

PDF Downloads: 1769
138 Simple Agents Benefit Only from Simple Brains

Authors: Valeri A. Makarov, Nazareth P. Castellanos, Manuel G. Velarde

Abstract:

In order to answer the general question "What does a simple agent with a limited life-time require for constructing a useful representation of the environment?", we propose a robot platform including the simplest probabilistic sensory and motor layers. We then use the platform as a test-bed for evaluating the navigational capabilities of the robot with different "brains". We claim that protocognitive behavior is not a consequence of highly sophisticated sensory-motor organs but instead emerges through an increment of internal complexity and reutilization of the minimal sensory information. We show that the most fundamental robot element, the short-time memory, is essential in obstacle avoidance. However, in the simplest conditions with no obstacles, the straightforward memoryless robot is usually superior. We also demonstrate how low-level action planning, involving essentially nonlinear dynamics, provides a considerable gain in robot performance by dynamically changing the robot's strategy. Still, for a very short life-time the brainless robot is superior. Accordingly, we suggest that small organisms (or agents) with a short life-time do not require complex brains and can even benefit from simple brain-like (reflex) structures. To some extent, this may mean that the controlling blocks of modern robots are too complicated relative to their life-time and mechanical abilities.

Keywords: Neural network, probabilistic control, robot navigation.

PDF Downloads: 1392
137 Linking Urban Planning and Water Planning to Achieve Sustainable Development and Liveability Outcomes in the New Growth Areas of Melbourne, Australia

Authors: Dennis Corbett

Abstract:

The city of Melbourne in Victoria, Australia, provides a number of examples of how a growing city can integrate urban planning and water planning to achieve sustainable urban development, environmental protection, liveability and integrated water management outcomes, and move towards becoming a "Water Sensitive City". Three examples are provided: the development at Botanic Ridge, where a 318 hectare residential development is being planned and where integrated water management options are being implemented using a "triple bottom line" sustainability investment approach; the Toolern development, which will capture and reuse stormwater and recycled water to greatly reduce the suburb's demand for potable water; and the development at Kalkallo, where a 1,200 hectare industrial precinct is planned which will merge the design of the development's water supply, sewerage services and stormwater system. The paper argues that an integrated urban planning and water planning approach is fundamental to creating liveable, vibrant communities which meet social and financial needs while being in harmony with the local environment. Further work is required on developing investment frameworks and risk analysis frameworks to ensure that all possible solutions can be assessed equally.

Keywords: Integrated water management, stormwater management, sustainable urban development.

PDF Downloads: 2057
136 Selective Harmonic Elimination of PWM AC/AC Voltage Controller Using Hybrid RGA-PS Approach

Authors: A. K. Al-Othman, Nabil A. Ahmed, A. M. Al-Kandari, H. K. Ebraheem

Abstract:

Selective harmonic elimination-pulse width modulation techniques offer a tight control of the harmonic spectrum of a given voltage waveform generated by a power electronic converter along with a low number of switching transitions. Traditional optimization methods suffer from various drawbacks, such as prolonged and tedious computational steps and convergence to local optima; thus, the more the number of harmonics to be eliminated, the larger the computational complexity and time. This paper presents a novel method for output voltage harmonic elimination and voltage control of PWM AC/AC voltage converters using the principle of hybrid Real-Coded Genetic Algorithm-Pattern Search (RGA-PS) method. RGA is the primary optimizer exploiting its global search capabilities, PS is then employed to fine tune the best solution provided by RGA in each evolution. The proposed method enables linear control of the fundamental component of the output voltage and complete elimination of its harmonic contents up to a specified order. Theoretical studies have been carried out to show the effectiveness and robustness of the proposed method of selective harmonic elimination. Theoretical results are validated through simulation studies using PSIM software package.
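
The hybrid "global search, then local refinement" strategy can be sketched as follows, with SciPy's differential evolution standing in for the real-coded GA and Nelder-Mead standing in for pattern search. The harmonic model is a textbook quarter-wave-symmetric SHE formulation used purely for illustration; it is not the paper's AC/AC voltage controller equations, and the target modulation level is an arbitrary example value.

```python
# Sketch of the RGA-PS idea: a global optimizer finds approximate switching
# angles, then a local search refines them. The harmonic expression below is
# one common textbook form for a quarter-wave-symmetric PWM waveform and is
# used only to illustrate the two-stage optimization, not the paper's model.
import numpy as np
from scipy.optimize import differential_evolution, minimize

TARGET_M = 0.8             # desired fundamental amplitude (per unit, illustrative)
ELIMINATE = [5, 7]         # harmonic orders to drive to zero
N_ANGLES = 3               # one angle per controlled quantity

def harmonic(alpha, n):
    """b_n = (4/(n*pi)) * (1 + 2*sum_k (-1)^k cos(n*alpha_k)) for odd n."""
    k = np.arange(1, len(alpha) + 1)
    return 4.0 / (n * np.pi) * (1.0 + 2.0 * np.sum((-1.0) ** k * np.cos(n * alpha)))

def residual(x):
    alpha = np.sort(x)                           # enforce ordered switching angles
    err = (harmonic(alpha, 1) - TARGET_M) ** 2   # track the fundamental
    err += sum(harmonic(alpha, n) ** 2 for n in ELIMINATE)  # kill selected harmonics
    return err

bounds = [(0.01, np.pi / 2 - 0.01)] * N_ANGLES
coarse = differential_evolution(residual, bounds, seed=1, tol=1e-10)   # global stage ("RGA")
fine = minimize(residual, coarse.x, method="Nelder-Mead",              # local stage ("PS")
                options={"xatol": 1e-10, "fatol": 1e-14})
print("switching angles (rad):", np.round(np.sort(fine.x), 5), "residual:", fine.fun)
```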

Keywords: PWM, AC/AC voltage converters, selective harmonic elimination, direct search method, pattern search method, real-coded genetic algorithms, evolutionary algorithms and optimization.

PDF Downloads: 3277
135 Comparison of Phylogenetic Trees of Multiple Protein Sequence Alignment Methods

Authors: Khaddouja Boujenfa, Nadia Essoussi, Mohamed Limam

Abstract:

Multiple sequence alignment is a fundamental part of many bioinformatics applications, such as phylogenetic analysis. Many alignment methods have been proposed. Each method gives a different result for the same data set and consequently generates a different phylogenetic tree; hence, the chosen alignment method affects the resulting tree. However, in the literature there is no evaluation of multiple alignment methods based on the comparison of their phylogenetic trees. This work evaluates the following eight aligners: ClustalX, T-Coffee, SAGA, MUSCLE, MAFFT, DIALIGN, ProbCons and Align-m, based on the phylogenetic trees (test trees) they produce on a given data set. The Neighbor-Joining method is used to estimate the trees. Three criteria, namely the dNNI, the dRF and the Id_Tree, are established to test the ability of the different alignment methods to produce test trees closer to the reference one (true tree). Results show that the method which produces the most accurate alignment gives the test tree nearest to the reference tree. MUSCLE outperforms all aligners with respect to the three criteria and for all datasets, performing particularly better when sequence identities are within 10-20%. It is followed by T-Coffee at lower sequence identity (<10%), Align-m at 20-30% identity, and ClustalX and ProbCons at 30-50% identity. It is also noticed that when sequence identities are higher (>30%), the tree scores of all methods become similar.
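
The dRF criterion corresponds to the Robinson-Foulds comparison. As a minimal sketch, the function below counts the clades (bipartitions) present in exactly one of two rooted trees given as nested tuples of leaf names; the example trees are hypothetical, not from the paper's datasets.

```python
# Minimal sketch of the Robinson-Foulds (dRF) comparison mentioned in the
# abstract: the symmetric difference between the sets of non-trivial clades
# induced by two rooted trees. Trees are nested tuples of leaf names.

def clades(tree):
    """Return the set of non-trivial leaf sets (clades) induced by a nested-tuple tree."""
    found = set()

    def walk(node):
        if isinstance(node, tuple):
            leaves = frozenset().union(*(walk(child) for child in node))
            found.add(leaves)
            return leaves
        return frozenset([node])      # a leaf

    all_leaves = walk(tree)
    found.discard(all_leaves)         # the root clade is shared by construction
    return found

def robinson_foulds(tree_a, tree_b):
    """Number of clades present in exactly one of the two trees."""
    return len(clades(tree_a) ^ clades(tree_b))

if __name__ == "__main__":
    t1 = ((("A", "B"), "C"), ("D", "E"))
    t2 = ((("A", "C"), "B"), ("D", "E"))
    print("RF distance:", robinson_foulds(t1, t2))   # clades {A,B} and {A,C} differ -> 2
```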

Keywords: Multiple alignment methods, phylogenetic trees, Neighbor-Joining method, Robinson-Foulds distance.

PDF Downloads: 1787
134 Optimum Locations for Intercity Bus Terminals with the AHP Approach – Case Study of the City of Esfahan

Authors: Mehrdad Arabi, Ehsan Beheshtitabar, Bahador Ghadirifaraz, Behrooz Forjanizadeh

Abstract:

The interaction between humans, location and activity defines space. Within the framework of these relations, space serves as a container for the current characteristics of the three elements and their interrelations. Changing land use, considered together with average performance range, urban regulations, societal requirements, etc., provides welfare and comfort for citizens. From an engineering point of view, it is fundamental that choosing a proper location for a specific civil activity requires evaluating candidate locations from different perspectives. The debate over the desirable placement of municipal service elements in urban regions is one of the most important issues related to urban planning. This research is applied in terms of its goal and descriptive-analytical in nature. Initially, the existing terminals in Esfahan are surveyed, and then new locations are presented based on the evaluated criteria. In order to evaluate terminals with respect to the considered factors, an AHP model is first used to estimate the weights of the different factors, and then the existing and suggested locations are evaluated using Arc GIS software and the AHP model results. The results show that the existing bus terminals are located in fairly suitable locations. Further results of this study suggest new locations for establishing terminals based on urban criteria.
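
As an illustration of the weighting step in the AHP approach described above, the sketch below derives criteria weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The three criteria and all judgment values are hypothetical placeholders, not the study's actual factors.

```python
# Minimal sketch of the AHP weighting step used to score candidate terminal
# locations: weights from the principal eigenvector of a reciprocal pairwise
# comparison matrix, plus the consistency ratio (CR < 0.1 is acceptable).
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(pairwise):
    """Return (weights, consistency_ratio) for a reciprocal comparison matrix."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # normalized priority vector
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)             # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX[n] else 0.0
    return w, cr

if __name__ == "__main__":
    # Hypothetical criteria: road access, land availability, distance to demand centers
    judgments = [[1,     3,   5],
                 [1 / 3, 1,   2],
                 [1 / 5, 1 / 2, 1]]
    weights, cr = ahp_weights(judgments)
    print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```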

Keywords: Arc GIS, Esfahan city, Optimum locations, Terminals.

PDF Downloads: 2460
133 Turbulent Mixing and its Effects on Thermal Fatigue in Nuclear Reactors

Authors: Eggertson, E. C., Kapulla, R., Fokken, J., Prasser, H. M.

Abstract:

The turbulent mixing of coolant streams of different temperature and density can cause severe temperature fluctuations in piping systems in nuclear reactors. In certain periodic contraction cycles these conditions lead to thermal fatigue. The resulting aging effect prompts investigation into how the mixing of flows over a sharp temperature/density interface evolves. To study the fundamental turbulent mixing phenomena in the presence of density gradients, isokinetic (shear-free) mixing experiments are performed in a square channel with Reynolds numbers ranging from 2,500 to 60,000. Sucrose is used to create the density difference. A Wire Mesh Sensor (WMS) is used to determine the concentration map of the flow in the cross section. The mean interface width as a function of velocity, density difference and distance from the mixing point is analyzed using traditional methods chosen for the purposes of atmospheric/oceanic stratification analyses. A definition of the mixing layer thickness more appropriate to thermal fatigue and based on mixedness is devised. This definition shows why the thermal fatigue risk assessed using simple mixing layer growth can be misleading and why an approach that separates the effects of large scale (turbulent) and small scale (molecular) mixing is necessary.

Keywords: Concentration measurements, Mixedness, Stably stratified turbulent isokinetic mixing layer, Wire mesh sensor

PDF Downloads: 2188
132 One-DOF Precision Position Control using the Combined Piezo-VCM Actuator

Authors: Yung-Tien Liu, Chun-Chao Wang

Abstract:

This paper presents the control performance of a high-precision positioning device using the hybrid actuator composed of a piezoelectric (PZT) actuator and a voice-coil motor (VCM). The combined piezo-VCM actuator features two main characteristics: a large operation range due to long stroke of the VCM, and high precision and heavy load positioning ability due to PZT impact force. A one-degree-of-freedom (DOF) experimental setup was configured to examine the fundamental characteristics, and the control performance was effectively demonstrated by using a switching controller. In rough positioning state, an integral variable structure controller (IVSC) was used for the VCM to conduct long range of operation; in precision positioning state, an impact force controller (IFC) for the PZT actuator coupled with presliding states of the sliding table was used to obtain high-precision position control and achieve both forward and backward actuations. The experimental results showed that the sliding table having a mass of 881g and with a preload of 10 N was successfully positioned within the positioning accuracy of 10 nm in both forward and backward position controls.

Keywords: Integral variable structure controller (IVSC), impact force, precision positioning, presliding, PZT actuator, voice-coil motor (VCM).

PDF Downloads: 1887
131 Understanding the Selectional Preferences of the Twitter Mentions Network

Authors: R. Sudhesh Solomon, P. Y. K. L. Srinivas, Abhay Narayan, Amitava Das

Abstract:

Users in social networks either unicast or broadcast their messages. The at-mention is the popular way of unicasting on Twitter, whereas general tweeting can be considered a broadcasting method. Understanding the information flow and dynamics within a social network and modeling the same is a promising and open research area called Information Diffusion. This paper seeks an answer to a fundamental question: is the at-mention network, i.e., the unicasting pattern in social media, purely random in nature, or is there any user-specific selectional preference? To answer the question, we present an empirical analysis to understand the sociological aspects of the Twitter mentions network within a social network community. To understand the sociological behavior, we analyze the values (Schwartz model: Achievement, Benevolence, Conformity, Hedonism, Power, Security, Self-Direction, Stimulation, Tradition and Universalism) of all the users. Empirical results suggest that value traits are indeed a salient cue to understanding how the mention-based communication network functions. For example, we notice that individuals possessing similar values unicast among themselves more often than with people of other value types. We also observe that tradition-oriented and self-directed people do not maintain very close relationships in the network with people of different value traits.

Keywords: Social network analysis, information diffusion, personality and values, Twitter Mentions Network.

PDF Downloads: 676
130 The Study of the Interaction between Catanionic Surface Micelle SDS-CTAB and Insulin at Air/Water Interface

Authors: B. Tah, P. Pal, M. Mahato, R. Sarkar, G. B. Talapatra

Abstract:

Herein, we report the different types of surface morphology arising from the interaction between the pure protein insulin (INS) and a catanionic surfactant mixture of Sodium Dodecyl Sulfate (SDS) and Cetyl Trimethyl Ammonium Bromide (CTAB) at the air/water interface, obtained by the Langmuir-Blodgett (LB) technique. We characterized the aggregations by Scanning Electron Microscopy (SEM), Atomic Force Microscopy (AFM) and Fourier transform infrared spectroscopy (FTIR) in LB films. We found that INS adsorption increased in the presence of the catanionic surfactant at the air/water interface. The presence of a small amount of surfactant induces two-stage growth kinetics due to pure protein adsorption and protein-catanionic surface micelle interaction. The protein remains in its native state in the presence of a small amount of the surfactant mixture, and a small amount of the surfactant mixture with INS produces a surface micelle-type structure; this may be considered for a drug-delivery system. On the other hand, INS becomes unfolded and fibrillated in the presence of a higher amount of the surfactant mixture. In both cases, the protein was successfully immobilized on a glass substrate by the LB technique. These results may find applications in the fundamental science of the physical chemistry of surfactant systems, as well as in the preparation of drug-delivery systems.

Keywords: Air/water interface, Catanionic micelle, Insulin, Langmuir-Blodgett film

PDF Downloads: 2436
129 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia

Authors: Tim Nedyalkov

Abstract:

A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its Information Technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases the reliance on cloud-based services. Collecting, managing, and retaining large amounts of data in cloud environments make information security and data privacy protection essential. It becomes even more important to understand what key factors drive successful cloud adoption following the commencement of the Privacy Amendment Notifiable Data Breaches (NDB) Act 2017 in Australia as the regulatory changes impact many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success. The factors influence the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrated that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers to meet the rapidly changing regulatory and compliance requirements.

Keywords: Cloud compliance, cloud security, cloud security governance, data governance, privacy protection.

PDF Downloads: 838
128 Aggregation Scheduling Algorithms in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption, owing to the limited energy of the nodes, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, a schedule with the minimum number of timeslots such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-Noise-Ratio (SINR), have been adopted with different power models, uniform power and non-uniform power (with or without power control), and different antenna models, the omni-directional and directional antenna models. As the problem has been proven to be NP-hard, this survey article presents and compares several state-of-the-art approximation algorithms in the various models, using latency as the performance measure.
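
To make the scheduling problem concrete, the following is a minimal greedy sketch of convergecast scheduling under the graph interference model on a BFS tree. It is an illustration only, not one of the surveyed approximation algorithms, and the six-node topology is hypothetical.

```python
# Minimal sketch of aggregation (convergecast) scheduling under the graph
# interference model: a node sends to its BFS-tree parent once all its children
# have transmitted; two links conflict if they share a receiver or if a sender
# sits next to another link's receiver. Greedy slot-by-slot construction.
from collections import deque

def bfs_tree(adj, sink):
    parent = {sink: None}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def aggregation_schedule(adj, sink):
    parent = bfs_tree(adj, sink)
    pending = {v for v in adj if v != sink}          # nodes that still must transmit
    schedule = []                                    # schedule[t] = list of (sender, receiver)
    while pending:
        slot, receivers, senders = [], set(), set()
        for v in sorted(pending):
            children = {u for u, p in parent.items() if p == v}
            if children & pending:
                continue                             # not ready: a child is still pending
            p = parent[v]
            if (p in receivers                       # receiver already busy
                    or any(r in adj[v] for r in receivers)   # sender next to a receiver
                    or any(p in adj[s] for s in senders)):   # earlier sender next to p
                continue
            slot.append((v, p))
            receivers.add(p)
            senders.add(v)
        schedule.append(slot)
        pending -= senders
    return schedule

if __name__ == "__main__":
    adj = {0: {1, 2}, 1: {0, 3, 4}, 2: {0, 5}, 3: {1}, 4: {1}, 5: {2}}   # node 0 is the sink
    for t, slot in enumerate(aggregation_schedule(adj, 0), start=1):
        print(f"slot {t}: {slot}")
```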

Keywords: Data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional.

PDF Downloads: 761
127 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study

Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed

Abstract:

This paper compares the substructure and direct approaches for soil-structure interaction (SSI) analysis in the time domain. In the substructure approach, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the coupled soil-structure system. To explore the potential limitations of the substructure modeling process, a two-dimensional (2D) reinforced concrete frame structure is modeled and analyzed using the direct and substructure approaches. The results show discrepancy between the simulated responses of the direct and substructure models. It is concluded that the main source of discrepancy is likely attributed to the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function, considering the presence of the superstructure, shall alternatively be developed. This refined impedance function is expected to improve the simulation accuracy of the substructure approach.
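
The substructure step described above, collapsing the frequency-dependent impedance into constant spring and dashpot coefficients at the fundamental frequency of the coupled system, can be sketched as k = Re Z(ω0) and c = Im Z(ω0)/ω0. In the sketch below, the impedance function, its parameters, and the assumed fundamental frequency are placeholders, not values from the paper.

```python
# Minimal sketch of approximating a frequency-dependent foundation impedance
# Z(w) by constant spring (k) and dashpot (c) coefficients at the fundamental
# frequency w0 of the coupled soil-structure system. The impedance below is an
# arbitrary smooth placeholder, not the massless-rigid-foundation solution.
import numpy as np

def impedance(w, k_static=2.0e8, a=0.15, c0=4.0e6):
    """Hypothetical complex horizontal impedance Z(w) [N/m]."""
    stiffness = k_static * (1.0 - a * (w / 10.0) ** 2)   # stiffness softening with frequency
    damping = c0 * w                                      # radiation damping term
    return stiffness + 1j * damping

def equivalent_spring_dashpot(w0):
    z = impedance(w0)
    k_eq = z.real                 # equivalent spring [N/m]
    c_eq = z.imag / w0            # equivalent dashpot [N*s/m]
    return k_eq, c_eq

if __name__ == "__main__":
    w0 = 2 * np.pi * 1.8          # assumed fundamental frequency of the coupled system (rad/s)
    k_eq, c_eq = equivalent_spring_dashpot(w0)
    print(f"k = {k_eq:.3e} N/m, c = {c_eq:.3e} N*s/m at w0 = {w0:.2f} rad/s")
```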

Keywords: Direct approach, impedance function, massless rigid foundation, soil-structure interaction, substructure approach.

PDF Downloads: 390
126 Early Age Behavior of Wind Turbine Gravity Foundations

Authors: J. Modu, J. F. Georgin, L. Briançon, E. Antoinet

Abstract:

Wind turbine gravity foundations are designed to resist overturning failure through gravitational forces resulting from their masses. Owing to the relatively high volume of the cementitious material present, the foundations tend to suffer thermal strains and internal cracking due to high temperatures and temperature gradients depending on factors such as geometry, mix design and level of restraint. This is a result of a fully coupled mechanism commonly known as THMC (Thermo-Hygro-Mechanical-Chemical) coupling whose kinetics peak during the early age of concrete. The focus of this paper is therefore to present and offer a discussion on the temperature and humidity evolutions occurring in mass pours such as wind turbine gravity foundations based on sensor results obtained from the monitoring of an actual wind turbine foundation. To offer prediction of the evolutions, the formulation of a 3D Thermal-Hydro-Chemical (THC) model that is mainly derived from classical fundamental physical laws is also presented and discussed. The THC model can be mathematically fully coupled in Finite Element analyses. In the current study, COMSOL Multiphysics software was used to simulate the 3D THC coupling that occurred in the monitored wind turbine foundation to predict the temperature evolution at five different points within the foundation from time of casting.

Keywords: Early age behavior, reinforced concrete, THC 3D models, wind turbines.

PDF Downloads: 388
125 Pattern Recognition Based Prosthesis Control for Movement of Forearms Using Surface and Intramuscular EMG Signals

Authors: Anjana Goen, D. C. Tiwari

Abstract:

The myoelectric control system is the fundamental component of modern prostheses, which uses the myoelectric signals from an individual's muscles to control the prosthesis movements. The surface electromyogram (sEMG) signal, being noninvasive, has been used as an input to prosthesis controllers for many years. Recent technological advances have led to the development of implantable myoelectric sensors, which enable the internal myoelectric signal (MES) to be used as input to these prosthesis controllers. The intramuscular measurement can provide focal recordings from deep muscles of the forearm and independent signals relatively free of crosstalk, thus allowing for more independent control sites. However, little work has been done to compare the two inputs. In this paper, we compare the classification accuracy of six pattern-recognition-based myoelectric controllers which use surface myoelectric signals recorded using untargeted (symmetric) surface electrode arrays with that of the same controllers using multichannel intramuscular myoelectric signals from targeted intramuscular electrodes as inputs. There was no significant enhancement in the classification accuracy as a result of using the intramuscular EMG measurement technique when compared to the results acquired using the surface EMG measurement technique. An impressive classification accuracy (99%) could be achieved by optimally selecting only five channels of surface EMG.
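
For illustration, the sketch below shows the general shape of such a classification-accuracy comparison: time-domain features per EMG channel feeding a linear classifier, scored by cross-validation. LDA is used only as a generic stand-in for the paper's DLPP/SPCA-based controllers, and the signals are synthetic.

```python
# Minimal sketch of an EMG pattern-recognition accuracy estimate: extract classic
# time-domain features from each channel of a window, train a linear classifier,
# and report cross-validated accuracy. Data are synthetic, not recorded EMG.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
N_CLASSES, WINDOWS_PER_CLASS, N_CHANNELS, WINDOW_LEN = 6, 40, 5, 200

def td_features(window):
    """Mean absolute value, zero crossings, and waveform length of one channel."""
    mav = np.mean(np.abs(window))
    zc = np.sum(np.sign(window[:-1]) != np.sign(window[1:]))
    wl = np.sum(np.abs(np.diff(window)))
    return [mav, zc, wl]

X, y = [], []
for cls in range(N_CLASSES):
    for _ in range(WINDOWS_PER_CLASS):
        # synthetic multichannel window whose amplitude depends on the motion class
        emg = rng.normal(scale=1.0 + 0.3 * cls, size=(N_CHANNELS, WINDOW_LEN))
        X.append(np.concatenate([td_features(ch) for ch in emg]))
        y.append(cls)

scores = cross_val_score(LinearDiscriminantAnalysis(), np.array(X), np.array(y), cv=5)
print(f"cross-validated accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```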

Keywords: Discriminant Locality Preserving Projections (DLPP), myoelectric signal (MES), Sparse Principal Component Analysis (SPCA), Time Frequency Representations (TFRs).

PDF Downloads: 1359
124 Linking Sustainable Public Procurement and the Sustainable Development Goals in Zambia: A Preliminary Investigation

Authors: Charles P. Mukumba, Kahilu K. Shakantu

Abstract:

Achieving the Sustainable Development Goals (SDGs) is critical to achieving transformational results that support Zambia's developmental agenda. Public procurement is integral to the government's mission to deliver goods and services in a timely and economical manner beyond the value of money spent. This study explores the link between sustainable public procurement and the SDGs in Zambia. To validate the established links with public sector procurement in Zambia, the study employed qualitative research using semi-structured interviews with 12 public procurement officials. The collected data were analysed using thematic analysis. The findings indicate that public procurement plays a fundamental role in achieving the SDGs by helping deliver core public services that support SDGs and systematizing and co-delivering added value along the way. The study further established the importance of sustainable public procurement within the development context. The interviews were limited to mainstream public sector procurement entities in Lusaka, Zambia. Sustainable public procurement actions have the potential to impact SDGs. Promoting sustainable public procurement will enhance sustainable development and significantly improve the supply chain, benefiting the economy, society and environment. Findings will inform policy-makers how to strategically design sustainable public procurement policy by attuning it to procuring entities' objectives and priorities to contribute to attaining SDGs.

Keywords: Sustainable public procurement, sustainable development goals, SDG targets, Zambia.

PDF Downloads: 96
123 Study of Human Upper Arm Girth during Elbow Isokinetic Contractions Based on a Smart Circumferential Measuring System

Authors: Xi Wang, Xiaoming Tao, Raymond C. H. So

Abstract:

As a convenient and noninvasive sensing approach, automatic limb girth measurement has been applied to detect the intention behind human motion from muscle deformation. The sensing validity has been established by preliminary research but still needs more fundamental study, especially on kinetic contraction modes. Based on novel fabric strain sensors, a soft and smart limb girth measurement system was developed by the authors' group, which can measure the limb girth in motion. Experiments were carried out on elbow isometric flexion and elbow isokinetic flexion (biceps' isokinetic contractions) at 90°/s, 60°/s, and 120°/s for 10 subjects (2 canoeists and 8 ordinary people). After removal of the natural circumferential increments due to elbow position, the joint torque is found not to be uniformly sensitive to the limb circumferential strains, but to decline as the elbow joint angle rises, regardless of the angular speed. Moreover, the maximum joint torque was found to be an exponential function of the joint's angular speed. This research contributes substantially to the application of automatic limb girth measurement during kinetic contractions, and it is useful for predicting the contraction level of voluntary skeletal muscles.
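
The claim that maximum joint torque follows an exponential function of angular speed corresponds to a simple curve fit of the form T = a·exp(b·ω). A minimal sketch with hypothetical torque values (not the measured data) is given below.

```python
# Minimal sketch of the curve-fitting step behind the abstract's claim that the
# maximum joint torque is an exponential function of the angular speed. The
# (speed, torque) pairs below are hypothetical placeholders, not measured data.
import numpy as np
from scipy.optimize import curve_fit

def torque_model(omega, a, b):
    """T_max = a * exp(b * omega)."""
    return a * np.exp(b * omega)

speeds = np.array([60.0, 90.0, 120.0])    # deg/s, the three isokinetic settings
torques = np.array([48.0, 41.0, 35.5])    # N*m, illustrative maxima

params, _ = curve_fit(torque_model, speeds, torques, p0=(50.0, -0.005))
a, b = params
print(f"T_max ~ {a:.1f} * exp({b:.4f} * omega)")
```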

Keywords: Fabric strain sensor, muscle deformation, isokinetic contraction, joint torque, limb girth strain.

PDF Downloads: 2066
122 Heat Transfer and Entropy Generation in a Partial Porous Channel Using LTNE and Exothermicity/Endothermicity Features

Authors: Mohsen Torabi, Nader Karimi, Kaili Zhang

Abstract:

This work aims to provide a comprehensive study on the heat transfer and entropy generation rates of a horizontal channel partially filled with a porous medium which experiences internal heat generation or consumption due to exothermic or endothermic chemical reaction. The focus has been given to the local thermal non-equilibrium (LTNE) model. The LTNE approach helps us to deliver more accurate data regarding temperature distribution within the system and accordingly to provide more accurate Nusselt number and entropy generation rates. Darcy-Brinkman model is used for the momentum equations, and constant heat flux is assumed for boundary conditions for both upper and lower surfaces. Analytical solutions have been provided for both velocity and temperature fields. By incorporating the investigated velocity and temperature formulas into the provided fundamental equations for the entropy generation, both local and total entropy generation rates are plotted for a number of cases. Bifurcation phenomena regarding temperature distribution and interface heat flux ratio are observed. It has been found that the exothermicity or endothermicity characteristic of the channel does have a considerable impact on the temperature fields and entropy generation rates.

Keywords: Entropy generation, exothermicity, endothermicity, forced convection, local thermal non-equilibrium, analytical modeling.

PDF Downloads: 829
121 The Analysis of Secondary Case Studies as a Starting Point for Grounded Theory Studies: An Example from the Enterprise Software Industry

Authors: Abilio Avila, Orestis Terzidis

Abstract:

A fundamental principle of Grounded Theory (GT) is to prevent the formation of preconceived theories. This implies the need to start a research study with an open mind and to avoid being absorbed by the existing literature. However, to start a new study without an understanding of the research domain and its context can be extremely challenging. This paper presents a research approach that simultaneously supports a researcher to identify and to focus on critical areas of a research project and prevent the formation of prejudiced concepts by the current body of literature. This approach comprises of four stages: Selection of secondary case studies, analysis of secondary case studies, development of an initial conceptual framework, development of an initial interview guide. The analysis of secondary case studies as a starting point for a research project allows a researcher to create a first understanding of a research area based on real-world cases without being influenced by the existing body of theory. It enables a researcher to develop through a structured course of actions a firm guide that establishes a solid starting point for further investigations. Thus, the described approach may have significant implications for GT researchers who aim to start a study within a given research area.

Keywords: Grounded theory, qualitative research, secondary case studies, secondary data analysis, interview guide.

PDF Downloads: 1799
120 A Novel Approach for Coin Identification using Eigenvalues of Covariance Matrix, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the Circular Hough Transform (CHT) and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are explored for the purpose of circular object detection. A sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which exploits geometrical symmetry. After finding circular objects, the proposed method uses the texture on the surface of the coins, called textons, which are unique properties of coins and refer to the fundamental micro-structures in generic natural images. This method has been tested on several real-world images including coin and non-coin images. The performance is also evaluated based on the noise-withstanding capability.
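
The small/large eigenvalue cue mentioned above can be illustrated with a minimal sketch: for the edge pixels of a connected region, the two eigenvalues of the coordinate covariance matrix are nearly equal for circular regions and strongly unequal for elongated ones. The contours below are synthetic, not taken from coin images.

```python
# Minimal sketch of the covariance-eigenvalue cue for circular object detection:
# compute the 2x2 covariance of (x, y) edge-pixel coordinates and compare its
# small and large eigenvalues. A ratio near 1 indicates a circular region.
import numpy as np

def eigenvalue_ratio(points):
    """Small/large eigenvalue ratio of the covariance of (x, y) edge pixels."""
    cov = np.cov(np.asarray(points, dtype=float).T)
    small, large = np.sort(np.linalg.eigvalsh(cov))
    return small / large

if __name__ == "__main__":
    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    circle = np.c_[50 + 20 * np.cos(theta), 60 + 20 * np.sin(theta)]    # coin-like contour
    ellipse = np.c_[50 + 40 * np.cos(theta), 60 + 8 * np.sin(theta)]    # elongated contour
    print(f"circle  ratio: {eigenvalue_ratio(circle):.2f}")   # close to 1 -> circular
    print(f"ellipse ratio: {eigenvalue_ratio(ellipse):.2f}")  # well below 1 -> not a coin
```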

Keywords: Circular Hough Transform, Coin detection, Covariance matrix, Eigenvalues, Raster scan Algorithm, Texton.

PDF Downloads: 1843
119 Synthesizing an Artificial Loess for Geotechnical Investigations of Collapsible Soil Behavior

Authors: Hamed Sadeghi, Pouya A. Panahi, Hamed Nasiri, Mohammad Sadeghi

Abstract:

Collapsible soils like loess comprise an important category of problematic soils for construction purposes and sustainable development. As a result, research on both the geological and geotechnical aspects of this type of soil has been in progress for decades. However, the considerable natural variability in the physical properties of in-situ loess strata, even within a single block sample, challenges fundamental laboratory investigations. The reason is that it is practically impossible to remove the effect of a specific factor, such as void ratio, from fair comparisons in order to reach a reliable conclusion. To cope with this limitation, two types of artificially made dispersive and calcareous loess are introduced, which can be easily reproduced in any soil mechanics laboratory provided that all their constituents are known and controlled. The collapse potential is explored for a variety of soil water salinities and lime contents, and comparisons are made against the natural soil behavior. Trends are reported for the influence of pore water salinity on collapse potential under different osmotic flow conditions. The most important advantage of the artificial loess is the ease of controlling the cementing agent content, such as calcite, or the dispersive potential, for studying their influence on the mechanical soil behavior.
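
For reference, the collapse potential compared across specimens is commonly computed from the single-oedometer wetting test as the change in void ratio on inundation divided by (1 + initial void ratio). A minimal sketch with hypothetical void ratios follows.

```python
# Minimal sketch of the collapse-potential calculation used when comparing
# artificial and natural loess specimens: in the single-oedometer (wetting)
# test, the collapse index is the change in void ratio on inundation divided
# by (1 + initial void ratio). The void ratios below are hypothetical values.

def collapse_potential(e_before_wetting, e_after_wetting, e_initial):
    """Collapse potential (%) = 100 * (e_before - e_after) / (1 + e_initial)."""
    return 100.0 * (e_before_wetting - e_after_wetting) / (1.0 + e_initial)

if __name__ == "__main__":
    cp = collapse_potential(e_before_wetting=0.95, e_after_wetting=0.83, e_initial=1.05)
    print(f"Collapse potential: {cp:.1f}%")   # larger values indicate higher collapsibility
```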

Keywords: Artificial loess, unsaturated soils, collapse potential, dispersive clays, laboratory tests.

PDF Downloads: 683
118 Under the Veneer of Words Lies Power: Foucauldian Analysis of Oleanna

Authors: D. Arjmandi

Abstract:

The notion of power and gender domination is one of the inseparable aspects of themes in postmodern literature. The reasons for its importance have been discussed frequently since the rise of Michel Foucault and his insight into the circulation of power and the transgression of forces. Language and society operate as the basic grounds for the study, as all human beings are bound to the set of rules and norms which shape them in the accepted way in the macrocosm. How different genders in different positions behave and react to the provocation of social forces and the superiority of one another is of great interest to writers and literary critics. Mamet's works are notable for their controversial but timely themes, which illustrate human conflicts with society and the greed for power. Many critics, like Christopher Bigsby and Harold Bloom, have discussed Mamet and his ideas in recent years. This paper is a study of Oleanna, Mamet's masterpiece about the teacher-student relationship and the circulation of power between a man and a woman. He shows the very breakable boundaries in the domination of a gender and the downfall of speech as the consequence of transgression and freedom. The failure of the language the teacher uses and the abuse of his own words by a student who seeks superiority and knowledge are the main subjects of the discussion. Supported by the ideas of Foucault, the language Mamet uses to present his characters becomes the fundamental premise of this study. As a result, language becomes both the means of achievement and of downfall.

Keywords: Domination, Foucault, language, Mamet, Oleanna, power, transgression.

PDF Downloads: 2422
117 Metal Ship and Robotic Car: A Hands-On Activity to Develop Scientific and Engineering Skills for High School Students

Authors: Jutharat Sunprasert, Ekapong Hirunsirisawat, Narongrit Waraporn, Somporn Peansukmanee

Abstract:

Metal Ship and Robotic Car is one of the hands-on activities in the course Fundamentals of Engineering and can be divided into three parts. In the first part, the metal ships were made using engineering drawing, physics and mathematics knowledge. In the second part, the students learned how to construct a robotic car and control it using computer programming. In the last part, the students had to combine the workings of these two objects in the final testing. The aim of this study was to investigate the effectiveness of a hands-on activity integrating Science, Technology, Engineering and Mathematics (STEM) concepts to develop scientific and engineering skills. The results showed that the majority of students felt this hands-on activity led to an increased confidence level in the integration of STEM. Moreover, 48% of all students engaged well with the STEM concepts. Students could obtain STEM knowledge through hands-on activities covering science and mathematics, engineering drawing, engineering workshop and computer programming; most students agreed or strongly agreed with this learning process. This indicates that the hands-on activity "Metal Ship and Robotic Car" is a useful tool to integrate each aspect of STEM. Furthermore, hands-on activities positively influence students' interest, which leads to increased learning achievement and also to the development of scientific and engineering skills.

Keywords: Hands-on activity, STEM education, computer programming, metal work.

PDF Downloads: 924
116 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of the parameters indicate that this distribution may be an appropriate choice for the interpolation of hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of moments weighted with probabilities (probability weighted moments, PWM), although this has often shown difficulty of convergence, or rather convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation settings. The reasons for this lie in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
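
Since the Wakeby distribution is defined through its quantile function, evaluation and random sampling follow directly by inverse transform. The sketch below uses the standard five-parameter form with illustrative parameter values, not estimates from precipitation data.

```python
# Minimal sketch of the five-parameter Wakeby distribution, defined through
# its quantile function
#   Q(F) = xi + (alpha/beta)*(1 - (1-F)**beta) - (gamma/delta)*(1 - (1-F)**(-delta)),
# with samples drawn by inverse transform. Parameter values are placeholders.
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    F = np.asarray(F, dtype=float)
    return (xi
            + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
            - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

def wakeby_sample(size, params, rng=None):
    rng = np.random.default_rng(rng)
    return wakeby_quantile(rng.uniform(size=size), *params)

if __name__ == "__main__":
    params = (0.0, 5.0, 2.0, 1.0, 0.3)       # xi, alpha, beta, gamma, delta (hypothetical)
    x = wakeby_sample(10_000, params, rng=42)
    print("sample mean:", round(float(x.mean()), 3),
          " 99th percentile:", round(float(np.quantile(x, 0.99)), 3))
```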

Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.

PDF Downloads: 617
115 A Detailed Experimental Study of the Springback Anisotropy of Three Metals using the Stretching-Bending Process

Authors: A. Soualem

Abstract:

Springback is a significant problem in the sheet metal forming process. When the tools are released after the forming stage, the product springs back because of the action of the internal stresses. In many cases the deviation of form is too large and compensation of the springback is necessary. The precise prediction of the springback of the product is increasingly significant for the design of the tools and for compensation, because of the higher ratio of the yield stress to the elastic modulus. The main objective of this paper was to study the effect of anisotropy on springback for three rolling directions: 0°, 45° and 90°. At the same time, we highlighted the influence of three different metallic materials: aluminum, steel and galvanized steel. The originality of our approach consists in tests performed by adapting a U-type stretching-bending device to a tensile testing machine, with which we studied and quantified the variation of the springback according to the rolling direction. We also showed the role of lubrication in the reduction of springback. Moreover, in this work we have studied an important characteristic of the deep drawing process, namely springback, presented the defects that appear in this process, and identified the many parameters that influence springback. Finally, our results lead us to understand the influence of grain orientation on springback for different metallic materials and to draw some conclusions on how to design deep drawing tools. In addition, the conducted work represents a fundamental contribution to the discussion of industrial applications.

Keywords: Deep-Drawing, Grains orientation, Laminate Tool, Springback.

PDF Downloads: 2047
114 Response Delay Model: Bridging the Gap in Urban Fire Disaster Response System

Authors: Sulaiman Yunus

Abstract:

The need for modeling the response to urban fire disasters cannot be overemphasized, as recurrent fire outbreaks have gutted most cities of the world. This calls for a prompt and efficient response system in order to mitigate the impact of such disasters. Promptness, as a function of time, is seen to be the fundamental determinant of the efficiency of a response system and of the magnitude of a fire disaster. Delay, resulting from several factors, is one of the major determinants of the promptness of a response system and also of the magnitude of a fire disaster. The Response Delay Model (RDM) intends to bridge the gap in urban fire disaster response systems by incorporating and synchronizing the delay moments when measuring the overall efficiency of a response system and determining the magnitude of a fire disaster. The model identifies two delay moments (pre-notification and intra-reflex sequence delay) that can be elastic and that collectively play a significant role in influencing the efficiency of a response system. Due to variation in the elasticity of the delay moments, the model provides for measuring the length of delays in order to arrive at a standard average delay moment for different parts of the world, taking into consideration geographic location, level of preparedness and awareness, technological advancement, and socio-economic and environmental factors. It is recommended that participatory research be embarked on locally and globally to determine standard average delay moments within each phase of the system, so as to enable determining the efficiency of response systems and predicting fire disaster magnitudes.

Keywords: Delay moment, fire disaster, reflex sequence, response, response delay moment.

PDF Downloads: 671