Search results for: dendritic computation
58 Double Wishbone Pushrod Suspension Systems Co-Simulation for Racing Applications
Authors: Suleyman Ogul Ertugrul, Mustafa Turgut, Serkan Inandı, Mustafa Gorkem Coban, Mustafa Kıgılı, Ali Mert, Oguzhan Kesmez, Murat Ozancı, Caglar Uyulan
Abstract:
In high-performance automotive engineering, the realistic simulation of suspension systems is crucial for enhancing vehicle dynamics and handling. This study focuses on the double wishbone suspension system, prevalent in racing vehicles due to its superior control and stability characteristics. Utilizing MATLAB and Adams Car simulation software, we conduct a comprehensive analysis of displacement behaviors and damper sizing under various dynamic conditions. The initial phase involves using MATLAB to simulate the entire suspension system, allowing for the preliminary determination of damper size based on the system's response under simulated conditions. Following this, manual calculations of wheel loads are performed to assess the forces acting on the front and rear suspensions during scenarios such as braking, cornering, maximum vertical loads, and acceleration. Further dynamic force analysis is carried out using MATLAB Simulink, focusing on the interactions between suspension components during key movements such as bumps and rebounds. This simulation helps in formulating precise force equations and in calculating the stiffness of the suspension springs. To enhance the accuracy of our findings, we focus on a detailed kinematic and dynamic analysis. This includes the creation of kinematic loops, derivation of relevant equations, and computation of Jacobian matrices to accurately determine damper travel and compression metrics. The calculated spring stiffness is crucial in selecting appropriate springs to ensure optimal suspension performance. To validate and refine our results, we replicate the analyses using the Adams Car software, renowned for its detailed handling of vehicular dynamics. The goal is to achieve a robust, reliable suspension setup that maximizes performance under the extreme conditions encountered in racing scenarios. This study exemplifies the integration of theoretical mechanics with advanced simulation tools to achieve a high-performance suspension setup that can significantly improve race car performance, providing a methodology that can be adapted for different types of racing vehicles.
Keywords: FSAE, suspension system, Adams Car, kinematic
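Where the abstract moves from wheel loads to spring stiffness, the standard ride-frequency sizing relation gives a feel for the calculation. This is a minimal sketch, not the paper's method; the motion-ratio convention and all numbers are illustrative assumptions.

```python
import math

def spring_stiffness(corner_mass_kg, ride_freq_hz, motion_ratio):
    """Spring rate from a target ride frequency.

    Assumed convention: motion_ratio = spring/damper displacement per unit
    of wheel displacement. wheel_rate = m * (2*pi*f)^2, then the spring
    rate is the wheel rate referred through the pushrod linkage.
    """
    wheel_rate = corner_mass_kg * (2 * math.pi * ride_freq_hz) ** 2  # N/m
    return wheel_rate / motion_ratio ** 2

# illustrative FSAE-scale corner: 70 kg sprung mass, 3 Hz ride frequency,
# pushrod motion ratio of 0.9
print(spring_stiffness(70, 3.0, 0.9))  # ~30.7 kN/m
```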
57 Transformation of Periodic Fuzzy Membership Function to Discrete Polygon on Circular Polar Coordinates
Authors: Takashi Mitsuishi
Abstract:
Fuzzy logic has gained acceptance in recent years in the fields of social sciences and humanities, such as psychology and linguistics, because it can manage the fuzziness of words and human subjectivity in a logical manner. However, the major field of application of fuzzy logic is control engineering, as it is a part of set theory and mathematical logic. The Mamdani method, which is the most popular technique for approximate reasoning in the field of fuzzy control, is one of the ways to numerically represent the control afforded by human language and sensitivity, and it has been applied in various practical control plants. Fuzzy logic has been gradually developing as an artificial intelligence in different applications such as neural networks, expert systems, and operations research. The objects of inference vary for different application fields. Some of these include time, angle, color, symptom, and medical condition, whose fuzzy membership functions are periodic functions. In the defuzzification stage, the domain of the membership function should be unique to obtain a unique defuzzified value. However, if the domain of the periodic membership function is made unique, an unintuitive defuzzified value may be obtained as the inference result when using the center of gravity method. Therefore, the authors propose a method of circular-polar-coordinates transformation and defuzzification of periodic membership functions in this study. The transformation to circular polar coordinates simplifies the domain of the periodic membership function. The defuzzified value in circular polar coordinates is an argument (an angle), and it must be calculated from the closed plane figure into which the periodic membership function is transformed on the circular polar coordinates. If the closed plane figure is kept continuous, matching the continuity of the membership function, a significant amount of computation is required. Therefore, to simplify the practical example and significantly reduce the computational complexity, we have discretized the continuous interval and the membership function in this study. The following three methods are proposed to decide the argument from the discrete polygon into which the continuous plane figure is transformed. The first method provides the argument of a straight line passing through the origin and the coordinate of the arithmetic mean of the coordinates of the polygon vertices (physical center of gravity). The second provides the argument of a straight line passing through the origin and the coordinate of the geometric center of gravity of the polygon. The third provides the argument of a straight line passing through the origin and bisecting the perimeter of the polygon (or of the closed continuous plane figure).
Keywords: defuzzification, fuzzy membership function, periodic function, polar coordinates transformation
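A small sketch of the first proposed method, under the assumption that the polygon vertices are obtained by plotting the membership grade as the radius at each sampled angle; the sampling and test function below are illustrative.

```python
import numpy as np

def defuzzify_polar(theta, mu):
    """First method from the abstract: map the periodic membership function
    to a polygon on polar coordinates (radius = membership grade), then take
    the argument of the line through the origin and the arithmetic mean of
    the vertex coordinates (physical center of gravity)."""
    x = mu * np.cos(theta)
    y = mu * np.sin(theta)
    return np.arctan2(y.mean(), x.mean())  # defuzzified value as an angle

# periodic membership function peaked at 1 rad, sampled over one period
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
mu = np.clip(np.cos(theta - 1.0), 0, None)
print(defuzzify_polar(theta, mu))  # ~1.0 rad, as intuition suggests
```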
56 The Effective Use of the Network in the Distributed Storage
Authors: Mamouni Mohammed Dhiya Eddine
Abstract:
This work aims at studying the exploitation of high-speed networks of clusters for distributed storage. Parallel applications running on clusters require both high-performance communications between nodes and efficient access to the storage system. Many studies on network technologies led to the design of dedicated architectures for clusters with very fast communications between computing nodes. Efficient distributed storage in clusters has been essentially developed by adding parallelization mechanisms so that the server(s) may sustain an increased workload. In this work, we propose to improve the performance of distributed storage systems in clusters by efficiently using the underlying high-performance network to access distant storage systems. The main question we are addressing is: do high-speed networks of clusters fit the requirements of transparent, efficient and high-performance access to remote storage? We show that storage requirements are very different from those of parallel computation. High-speed networks of clusters were designed to optimize communications between the different nodes of a parallel application. We study their utilization in a very different context, storage in clusters, where client-server models are generally used to access remote storage (for instance NFS, PVFS or LUSTRE). Our experimental study, based on the usage of the GM programming interface of MYRINET high-speed networks for distributed storage, raised several interesting problems. Firstly, the specific memory utilization in the storage access system layers does not easily fit the traditional memory model of high-speed networks. Secondly, the client-server models that are used for distributed storage have specific requirements on message control and event processing, which are not handled by existing interfaces. We propose different solutions to solve communication control problems at the filesystem level. We show that a modification of the network programming interface is required, and data transfer issues need an adaptation of the operating system. We detail several propositions for network programming interfaces which make their utilization easier in the context of distributed storage. The integration of flexible processing of data transfer in the new programming interface MYRINET/MX is finally presented. Performance evaluations show that its usage in the context of both storage and other types of applications is easy and efficient.
Keywords: distributed storage, remote file access, cluster, high-speed network, MYRINET, zero-copy, memory registration, communication control, event notification, application programming interface
55 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It serves two main tasks: displaying results by coloring items according to their class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and such a model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution, and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
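The paper's optimization is not public in the abstract, but the core idea, reusing a previous embedding so cluster positions stay coherent, can be approximated with off-the-shelf scikit-learn by initializing each run from the support embedding. The sketch below is a simplified stand-in, not the authors' algorithm, and assumes each subset is larger than the t-SNE perplexity.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

def chained_tsne(subsets, random_state=0):
    """Embed k subsets in sequence; each run starts from the positions of the
    nearest points in the previous (support) embedding, so total cost is
    roughly O(n^2 / k) instead of O(n^2) for one joint embedding."""
    rng = np.random.default_rng(random_state)
    embeddings, prev_X, prev_Y = [], None, None
    for X in subsets:
        if prev_Y is None:
            init = "pca"
        else:
            nn = NearestNeighbors(n_neighbors=1).fit(prev_X)
            _, idx = nn.kneighbors(X)
            # small jitter so coincident points can separate
            init = prev_Y[idx[:, 0]] + 1e-4 * rng.standard_normal((len(X), 2))
        Y = TSNE(n_components=2, init=init, random_state=random_state).fit_transform(X)
        embeddings.append(Y)
        prev_X, prev_Y = X, Y
    return embeddings
```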
54 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence of an ontology, it is required to specify a set of attribute correspondences across the two corresponding classes by clustering. Clustering reduces the number of potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes ontology matching computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is highly required for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be regarded as the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Besides, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in two instances across which there is an attribute correspondence. This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences to be applied in generating a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs used for comparing instances, as only attribute pairs whose coverage probability reaches 100% and whose similarity is above the specified threshold are considered as potential attributes for matching. Using clustering reduces the number of potential element correspondences to be considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, all elements of a class in the source ontology would have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.
Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
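For readers unfamiliar with K-medoids, a plain Voronoi-iteration variant over a precomputed distance matrix is enough to convey the clustering step. This is a generic sketch; the choice of attribute features is a hypothetical example, not the paper's feature set.

```python
import numpy as np
from scipy.spatial.distance import cdist

def k_medoids(D, k, max_iter=100, seed=0):
    """Voronoi-iteration k-medoids on a precomputed (n, n) distance matrix."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], size=k, replace=False)
    for _ in range(max_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                # member minimizing total distance to its cluster-mates
                new_medoids[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=0))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)

# hypothetical statistical value features per attribute (e.g. mean value
# length, numeric ratio, entropy); one row per attribute of a class
feats = np.random.default_rng(1).normal(size=(40, 3))
medoids, labels = k_medoids(cdist(feats, feats), k=4)
```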
53 Functional Neurocognitive Imaging (fNCI): A Diagnostic Tool for Assessing Concussion Neuromarker Abnormalities and Treating Post-Concussion Syndrome in Mild Traumatic Brain Injury Patients
Authors: Parker Murray, Marci Johnson, Tyson S. Burnham, Alina K. Fong, Mark D. Allen, Bruce McIff
Abstract:
Purpose: Pathological dysregulation of Neurovascular Coupling (NVC) caused by mild traumatic brain injury (mTBI) is the predominant source of chronic post-concussion syndrome (PCS) symptomology. fNCI has the ability to localize dysregulation in NVC by measuring blood-oxygen-level-dependent (BOLD) signaling during the performance of fMRI-adapted neuropsychological evaluations. With fNCI, 57 brain areas consistently affected by concussion were identified as PCS neural markers, which were validated on large samples of concussion patients and healthy controls. These neuromarkers provide the basis for a computation of PCS severity, referred to as the Severity Index Score (SIS). The SIS has proven valuable in making pre-treatment decisions, monitoring treatment efficiency, and assessing the long-term stability of outcomes. Methods and Materials: After being scanned while performing various cognitive tasks, 476 concussed patients received an SIS score based on the neural dysregulation of the 57 previously identified brain regions. These scans provide an objective measurement of attentional, subcortical, visual processing, language processing, and executive functioning abilities, which were used as biomarkers for post-concussive neural dysregulation. Initial SIS scores were used to develop individualized therapy incorporating cognitive, occupational, and neuromuscular modalities. These scores were also used to establish pre-treatment benchmarks and measure post-treatment improvement. Results: Changes in SIS were calculated as the percent change from pre- to post-treatment. Patients showed a mean improvement of 76.5 percent (σ = 23.3), and 75.7 percent of patients showed at least 60 percent improvement. Longitudinal reassessment of 24 of the patients, performed an average of 7.6 months post-treatment, shows that the SIS improvement is maintained and extended, with an average of 90.6 percent improvement from the original scan. Conclusions: fNCI provides a reliable measurement of NVC, allowing for the identification of concussion pathology. Additionally, fNCI-derived SIS scores direct tailored therapy to restore NVC, subsequently resolving chronic PCS resulting from mTBI.
Keywords: concussion, functional magnetic resonance imaging (fMRI), neurovascular coupling (NVC), post-concussion syndrome (PCS)
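A one-liner for the reported statistics, under the assumption (not stated in the abstract) that improvement means a reduction in SIS; the numbers below are placeholders.

```python
import numpy as np

def sis_improvement(pre, post):
    """Percent change from pre- to post-treatment; positive = improvement,
    assuming a lower SIS indicates lower PCS severity."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return 100.0 * (pre - post) / pre

imp = sis_improvement([10.0, 8.0, 12.0], [2.0, 3.0, 1.5])  # placeholder scores
print(imp.mean(), (imp >= 60).mean())  # mean improvement, fraction >= 60%
```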
52 Simscape Library for Large-Signal Physical Network Modeling of Inertial Microelectromechanical Devices
Authors: S. Srinivasan, E. Cretu
Abstract:
The information flow (e.g. block-diagram or signal flow graph) paradigm for the design and simulation of Microelectromechanical (MEMS)-based systems allows MEMS devices to be modeled easily using causal transfer functions and interfaced with electronic subsystems for fast system-level exploration of design alternatives and optimization. Nevertheless, the physical bi-directional coupling between different energy domains is not easily captured in causal signal flow modeling. Moreover, models of fundamental components acting as building blocks (e.g. gap-varying MEMS capacitor structures) depend not only on the component but also on the specific excitation mode (e.g. voltage or charge actuation). In contrast, the energy flow modeling paradigm, in terms of generalized across-through variables, offers an acausal perspective, clearly separating the physical model from the boundary conditions. This promotes reusability and the use of primitive physical models for assembling MEMS devices from primitive structures, based on the interconnection topology in generalized circuits. The physical modeling capabilities of Simscape have been used in the present work to develop a MEMS library containing parameterized fundamental building blocks (area- and gap-varying MEMS capacitors, nonlinear springs, displacement stoppers, etc.) for the design, simulation and optimization of MEMS inertial sensors. The models capture both the nonlinear electromechanical interactions and geometrical nonlinearities and can be used for both small- and large-signal analyses, including the numerical computation of pull-in voltages (stability loss). The Simscape behavioral modeling language was used for the implementation of reduced-order macro models, which present the advantage of a seamless interface with Simulink blocks for creating hybrid information/energy flow system models. Test bench simulations of the library models compare favorably with both analytical results and more in-depth finite element simulations performed in ANSYS. Separate MEMS-electronics integration tests were done on closed-loop MEMS accelerometers, where Simscape was used for modeling the MEMS device and Simulink for the electronic subsystem.
Keywords: across-through variables, electromechanical coupling, energy flow, information flow, Matlab/Simulink, MEMS, nonlinear, pull-in instability, reduced order macro models, Simscape
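For the pull-in computation the abstract mentions, the textbook closed form for a parallel-plate actuator with a linear spring is a useful sanity check against the numerical result; the parameter values below are illustrative, not from the paper.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def pull_in_voltage(k, gap, area):
    """Closed-form pull-in voltage of a parallel-plate electrostatic actuator
    with a linear spring: V_pi = sqrt(8*k*g^3 / (27*eps0*A)).
    The instability (stability loss) occurs at a displacement of gap/3."""
    return math.sqrt(8 * k * gap**3 / (27 * EPS0 * area))

# illustrative values: k = 1 N/m, 2 um gap, 100 um x 100 um plate
print(pull_in_voltage(1.0, 2e-6, 100e-6 * 100e-6))  # ~5.2 V
```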
51 Impact of Climate Change on Flow Regime in Himalayan Basins, Nepal
Authors: Tirtha Raj Adhikari, Lochan Prasad Devkota
Abstract:
This research studied the hydrological regime of three glacierized river basins in the Khumbu, Langtang, and Annapurna regions of Nepal using the Hydrologiska Byråns Vattenbalansavdelning (HBV) model, HBV-light 3.0. Future discharge scenarios were also studied using downscaled climate data derived from a statistical downscaling method. General Circulation Models (GCMs) successfully simulate future climate variability and climate change on a global scale; however, poor spatial resolution constrains their application for impact studies at a regional or local level. The downscaled precipitation and temperature data from the Coupled Global Circulation Model 3 (CGCM3) were used for the climate projection, under the A2 and A1B SRES scenarios. In addition, observed historical temperature, precipitation, and discharge data were collected from 14 different hydro-meteorological locations for the implementation of this study, which included watershed and hydro-meteorological characterization, trend analysis, and water balance computation. The simulated precipitation and temperature were corrected for bias before being implemented in the HBV-light 3.0 conceptual rainfall-runoff model to predict the flow regime, in which the GAP optimization approach and subsequent calibration were used to obtain several parameter sets that reproduced the observed stream flow. Except in summer, the analysis showed increasing trends in annual as well as seasonal precipitation during the period 2001-2060 for both A2 and A1B scenarios over the three basins under investigation. In these river basins, the model projected warmer days in every season of the entire period from 2001 to 2060 for both A1B and A2 scenarios. These warming trends are larger for maximum than for minimum temperatures throughout the year, indicating an increasing trend of the daily temperature range due to the recent global warming phenomenon. Furthermore, there are decreasing trends in summer discharge in the Langtang Khola (Langtang region), whereas discharge is increasing in the Modi Khola (Annapurna region) as well as the Dudh Koshi (Khumbu region) river basin. The changes in flow regime are more pronounced during the later parts of the future decades than during the earlier parts in all basins. Annual water surpluses of 1419 mm, 177 mm, and 49 mm were observed in the Annapurna, Langtang, and Khumbu regions, respectively.
Keywords: temperature, precipitation, water discharge, water balance, global warming
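The abstract does not name the bias-correction scheme, so the sketch below shows the simplest common one, linear scaling, purely as an assumption of how such a correction might look; monthly correction factors are typical in practice.

```python
import numpy as np

def scale_precip(obs, sim_hist, sim_fut):
    """Multiplicative linear scaling for GCM precipitation: rescale the
    future series by the observed/simulated mean ratio of the historical
    period (often done month by month)."""
    return np.asarray(sim_fut) * (np.mean(obs) / np.mean(sim_hist))

def shift_temp(obs, sim_hist, sim_fut):
    """Additive correction, the usual choice for temperature."""
    return np.asarray(sim_fut) + (np.mean(obs) - np.mean(sim_hist))
```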
50 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicles Imageries
Authors: Ahmed Elaksher
Abstract:
Over the last few years, significant progress has been made and new approaches have been proposed for efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs), with reduced costs compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without performing a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM Digital Elevation Models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data were collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data were processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts by performing a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of control points were conducted. Camera calibration parameters estimated from commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and only insignificant differences, within less than three seconds of arc, were found among the orientation angles. DEM differencing was performed between the generated DEMs, and vertical shifts of a few centimeters were found.
Keywords: UAV, photogrammetry, SfM, DEM
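DEM differencing itself is a simple grid operation once the two models are co-registered; a minimal sketch (assuming equal extent and cell size, which the paper's workflow would guarantee):

```python
import numpy as np

def dem_difference_stats(dem_a, dem_b):
    """Elementwise difference of two co-registered DEM grids; returns the
    mean vertical shift and the RMSE, ignoring no-data (NaN) cells."""
    d = np.asarray(dem_a, float) - np.asarray(dem_b, float)
    d = d[np.isfinite(d)]
    return d.mean(), np.sqrt((d ** 2).mean())
```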
49 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation
Authors: W. Meron Mebrahtu, R. Absi
Abstract:
Velocity distribution in turbulent open-channel flows is organized in a complex manner, due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models suitable for engineering applications. However, predictions are inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and an accurate representation of the eddy viscosity. A wide rectangular open channel seems suitable to begin the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effect of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profiles: from the Reynolds-averaged Navier-Stokes (RANS) equations, and from an equilibrium consideration between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for the eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl's eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing length equation derived from Von Karman's similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method allows more accurate velocity profiles with the same value of the damping coefficient, valid under different flow conditions. This work continues with investigating narrow channels, complex geometries, and the effect of solids transported in sewers.
Keywords: accuracy, eddy viscosity, sewers, velocity profile
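As background for the near-wall model the abstract cites, the classical Van Driest mixing-length profile can be integrated numerically in a few lines. This is the standard textbook constant-stress-layer formulation, not the authors' improved model.

```python
import numpy as np

KAPPA, A_PLUS = 0.41, 26.0  # von Karman constant, Van Driest damping constant

def velocity_profile(y_plus_max=1000.0, n=20000):
    """Integrate du+/dy+ = 2 / (1 + sqrt(1 + 4 l+^2)), the constant-stress
    result with Van Driest mixing length l+ = kappa*y+*(1 - exp(-y+/A+))."""
    y = np.linspace(0.0, y_plus_max, n)
    l = KAPPA * y * (1.0 - np.exp(-y / A_PLUS))
    dudy = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * l**2))
    u = np.concatenate(([0.0],
                        np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))))
    return y, u

y, u = velocity_profile()
# far from the wall this approaches the log law u+ = ln(y+)/kappa + B, B ~ 5
```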
48 A Feature Clustering-Based Sequential Selection Approach for Color Texture Classification
Authors: Mohamed Alimoussa, Alice Porebski, Nicolas Vandenbroucke, Rachid Oulad Haj Thami, Sana El Fkihi
Abstract:
Color and texture are highly discriminant visual cues that provide essential information in many types of images. Color texture representation and classification is therefore one of the most challenging problems in computer vision and image processing applications. Color textures can be represented in different color spaces by using multiple image descriptors, which generate a high-dimensional set of texture features. In order to reduce the dimensionality of the feature set, feature selection techniques can be used. The goal of feature selection is to find a relevant subset of the original feature space that can improve the accuracy and efficiency of a classification algorithm. Traditionally, feature selection has focused on removing irrelevant features, neglecting the possible redundancy between relevant ones. This is why some feature selection approaches prefer to use feature clustering analysis to aid and guide the search. These techniques can be divided into two categories. i) Feature clustering-based ranking algorithms use feature clustering as an analysis step that comes before feature ranking: after dividing the feature set into groups, these approaches perform a feature ranking in order to select the most discriminant feature of each group. ii) Feature clustering-based subset search algorithms can use feature clustering following one of three strategies: as an initial step that comes before the search, bound and combined with the search, or as an alternative to and replacement for the search. In this paper, we propose a new feature clustering-based sequential selection approach for the purpose of color texture representation and classification. Our approach is a three-step algorithm. First, irrelevant features are removed from the feature set thanks to a class-correlation measure. Then, introducing a new automatic feature clustering algorithm, the feature set is divided into several feature clusters. Finally, a sequential search algorithm, based on a filter model and a separability measure, builds a relevant and non-redundant feature subset: at each step, a feature is selected, and the features of the same cluster are removed and thus not considered thereafter. This significantly speeds up the selection process, since a large number of redundant features are eliminated at each step. The proposed algorithm uses the clustering algorithm bound and combined with the search. Experiments using a combination of two well-known texture descriptors, namely Haralick features extracted from Reduced Size Chromatic Co-occurrence Matrices (RSCCMs) and features extracted from Local Binary Pattern (LBP) image histograms, on five color texture data sets (Outex, NewBarktex, Parquet, Stex and USPtex) demonstrate the efficiency of our method compared to seven state-of-the-art methods in terms of accuracy and computation time.
Keywords: feature selection, color texture classification, feature clustering, color LBP, chromatic co-occurrence matrix
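The "select one, drop its cluster-mates" search is easy to sketch. The scoring function below uses cross-validated accuracy as a stand-in for the paper's filter-model separability measure, and the data and cluster labels are synthetic, purely to show the mechanics.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def score(F_sub, y):
    # stand-in for the separability measure of the filter model
    return cross_val_score(KNeighborsClassifier(3), F_sub, y, cv=3).mean()

def cluster_sequential_select(F, y, clusters, n_select):
    """Sequential search bound and combined with clustering: pick the best
    remaining feature, then drop every feature of its cluster so redundant
    candidates are never re-examined."""
    selected, remaining = [], set(range(F.shape[1]))
    while remaining and len(selected) < n_select:
        best = max(remaining, key=lambda j: score(F[:, selected + [j]], y))
        selected.append(best)
        remaining -= {j for j in remaining if clusters[j] == clusters[best]}
    return selected

rng = np.random.default_rng(0)
F = rng.normal(size=(60, 8)); y = (F[:, 0] + F[:, 4] > 0).astype(int)
clusters = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # e.g. from the clustering step
print(cluster_sequential_select(F, y, clusters, n_select=3))
```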
47 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization
Authors: Aitor Bilbao, Dragos Axinte, John Billingham
Abstract:
The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam with particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration/deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent, different technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, in which the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually described as discrete systems, and the total removed material is calculated by the summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough and the behaviour can be similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes. The inverse problem is usually solved for this kind of process by simply controlling the dwell time in proportion to the required depth of milling at each single pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation
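The linear (shallow-etch) baseline that the paper improves upon can be shown in one dimension: depth is the convolution of a beam footprint with the dwell-time map, solved for non-negative dwell times. The Gaussian footprint, etch rate, and grid are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def dwell_times(target_depth, x, sigma=0.5, etch_rate=1.0):
    """Linear model: depth = K @ t, with a Gaussian beam footprint K.
    Solve for non-negative dwell times t with NNLS; valid only for
    shallow etches, which is exactly the limitation the abstract notes."""
    K = etch_rate * np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    t, _residual = nnls(K, target_depth)
    return t

x = np.linspace(0, 10, 60)
target = np.where((x > 3) & (x < 7), 1.0, 0.0)  # a step pocket profile
t = dwell_times(target, x)
```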
46 Gendered Experiences of the Urban Space in India as Portrayed by Hindi Cinema: A Quantitative Analysis
Authors: Hugo Ribadeau Dumas
Abstract:
In India, cities represent intense battlefields where patriarchal norms are simultaneously defied and reinforced. While Indian metropolises have witnessed numerous initiatives in which women boldly claimed their right to the city, urban spaces still remain disproportionately unfriendly to female city-dwellers. As a result, the presence of strees (women, in Hindi) in the streets remains a socially and politically potent phenomenon. This paper explores how, in India, women engage with the city as compared to men. Borrowing analytical tools from urban geography, it uses Hindi cinema as a medium to map the extent to which activities, attitudes, and experiences in urban spaces are highly gendered. The sample consists of 30 movies, both mainstream and independent, which were released between 2010 and 2020, were set in an urban environment, and comprised at least one pivotal female character. The paper adopts a quantitative approach, consisting of the scrutiny of close to 3,000 minutes of footage, the labeling and time count of every scene, and the computation of regressions to identify statistical relationships between characters and the way they navigate the city. According to the analysis, female characters spend half as much time in public space as their male counterparts. When they do step out, women do so mostly for utilitarian reasons; conversely, in private spaces or in pseudo-public commercial places, like malls, they indulge in fun activities. For male characters, the pattern is the exact opposite: fun takes place in public and serious work in private. The characters' attitudes in the streets are also greatly gendered: men spend a significant amount of time immobile, loitering, while women are usually on the move, displaying some sense of purpose. Likewise, body language and emotional expressiveness betray differentiated gender scripts: while women wander the streets either smiling, in a charming role, or with a hostile face, in a defensive mode, men are more likely to adopt neutral facial expressions. These trends were observed across all movies, although some nuances were identified depending on the characters' age group, social background, and city, highlighting that the urban experience is not the same for all women. The empirical evidence presented in this study is helpful for reflecting on the meaning of public space in the context of contemporary Indian cities. The paper ends with a discussion on the link between universal access to public spaces and women's empowerment.
Keywords: cinema, Indian cities, public space, women empowerment
45 Temperature Dependence of Photoluminescence Intensity of Europium Dinuclear Complex
Authors: Kwedi L. M. Nsah, Hisao Uchiki
Abstract:
Quantum computation is a new and exciting field making use of quantum mechanical phenomena. In classical computers, information is represented as bits with values of either 0 or 1, but a quantum computer uses quantum bits in an arbitrary superposition of 0 and 1, enabling it to reach beyond the limits predicted by classical information theory. A lanthanide-ion quantum computer is an organic crystal containing a lanthanide ion. Europium is a favored lanthanide, since it exhibits long nuclear spin coherence times, and Eu(III) is photo-stable and has two stable isotopes. In a europium organic crystal, the key factor is the mutual dipole-dipole interaction between two europium atoms. Crystals of the complex were formed by a 2:1 reaction of Eu(fod)3 and bpm. The transparent white crystals formed showed brilliant red luminescence under a 405 nm laser. Photoluminescence spectroscopy was performed both at room and cryogenic temperatures (300-14 K). The luminescence spectrum of [Eu(fod)3(μ-bpm)Eu(fod)3] showed the characteristic Eu(III) emission transitions in the range 570–630 nm, due to the deactivation of the 5D0 emissive state to 7Fj. For the application of the dinuclear Eu3+ complex to a q-bit device, attention was focused on the 5D0 → 7F0 transition, around 580 nm. The presence of the 5D0 → 7F0 transition at room temperature revealed that at least one europium site had no inversion center. Since the line is unsplit by the crystal field effect, any multiplicity observed is due to a multiplicity of Eu3+ sites. For a q-bit element, a narrower line width of the 5D0 → 7F0 PL band of the Eu3+ ion is preferable. Cryogenic temperatures (300 K - 14 K) were applied to reduce inhomogeneous broadening and distinguish between ions. A CCD image sensor was used for the low-temperature photoluminescence measurement, and a far better resolved luminescence spectrum was obtained by cooling the complex to 14 K. A red shift of 15 cm⁻¹ in the 5D0 → 7F0 peak position was observed upon cooling; the line shifted towards lower wavenumber. An emission spectrum in the 5D0 → 7F0 transition region was obtained to verify the line width. At this temperature, a peak with a magnitude three times that at room temperature was observed. The temperature dependence of the 5D0 state of Eu(fod)3(μ-bpm)Eu(fod)3 was strong in the vicinity of 60 K to 100 K. Thermal quenching was observed at temperatures higher than 100 K, at which point the intensity began to decrease slowly with increasing temperature. The temperature quenching of Eu3+ with increasing temperature is caused by energy migration, and 100 K is the appropriate temperature for the observation of the 5D0 → 7F0 emission peak. The europium dinuclear complex bridged by bpm was successfully prepared and monitored at cryogenic temperatures. At 100 K, the Eu3+-doped complex has good thermal stability, and this temperature is appropriate for the observation of the 5D0 → 7F0 emission peak. Sintering the sample above 600 °C could also be a method to consider, but the Eu3+ ion can be reduced to Eu2+, which is why cryogenic temperature measurement is preferable over other methods.
Keywords: Eu(fod)₃, europium dinuclear complex, europium ion, quantum bit, quantum computer, 2,2'-bipyrimidine
44 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and, in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. Its premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite this, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information. That is, information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, thereby demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time-step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single-KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, distinct failure modes found, and the relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
43 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications, such as healthcare, to gather patients' emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said'. Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data, and the existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue of feature selection by exploiting the use of traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim's cube, a 3-dimensional projection of emotions. Monoamine neurotransmitters are a type of chemical messenger in the brain that transmits signals on perceiving emotions, and the cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim's cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
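A minimal sketch of the "MFCCs as images" idea, not the authors' Emo-CNN architecture: the MFCC matrix is shaped as a one-channel image and fed to a small convolutional stack. All layer sizes and the fixed frame count are assumptions for illustration.

```python
import librosa
import torch
import torch.nn as nn

def mfcc_image(path, sr=16000, n_mfcc=40, frames=128):
    """Load audio and shape its MFCCs as a 1-channel 'image' for the CNN."""
    y, sr = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)  # pad/crop time axis
    return torch.tensor(m, dtype=torch.float32).unsqueeze(0)  # (1, 40, 128)

class EmoCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 10 * 32, n_classes))

    def forward(self, x):  # x: (batch, 1, 40, 128)
        return self.net(x)
```

The penultimate activations of such a network could then be projected with a three-component PCA, as the abstract describes for the Lovheim-cube mapping.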
42 Evaluating the Teaching and Learning Value of Tablets
Authors: Willem J. A. Louw
Abstract:
The wave of new advanced computing technology developed during the recent past has significantly changed the way we communicate, collaborate, and collect information. It has created a new technology environment and paradigm in which our children and students grow up, and this impacts their learning. Research has confirmed that Generation Y students have a preference for learning in the new technology environment. The challenge, or question, is: how do we adjust our teaching and learning to make the most of these changes? The complexity of effective and efficient teaching and learning must not be underestimated, and changes must be preceded by proper objective research to prevent any haphazard developments that could do more harm than good. A blended learning approach has been used in the Forestry department for a number of years, including the use of electronic peer-assisted learning (e-pal) in a fixed-computer set-up within a learning management system environment. It was decided to extend the investigation and do some exploratory research by using a range of different tablet devices. For this purpose, learning activities or assignments were designed to cover aspects of communication, collaboration, and collection of information. The Moodle learning management system was used to present normal module information, to communicate with students, and for feedback and data collection. Student feedback was collected using an online questionnaire and informal discussions. The research project was implemented in 2013, 2014, and 2015 amongst first- and third-year students doing a three-year technical tertiary forestry qualification in commercial plantation management. In general, more than 80% of the students indicated that the device was very useful in their learning environment, while the rest indicated that the devices were not very useful. More than ninety percent of the students acknowledged that they would like to continue using the devices for all of their modules, whilst the rest reported functioning efficiently without the devices. Results indicated that information collection (access to resources) was rated the most advantageous factor, followed by communication and collaboration. The main general advantages of using tablets listed by the students were mobility (portability); 24/7 access to learning material and information of any kind on a user-friendly device in a Wi-Fi environment; fast computing process speeds; saving time, effort, and airtime through Skype calls and e-mail; and the use of various applications. Ownership of the device is a critical factor, while risk was identified as a major potential constraint. Significant differences were reported between the different types and qualities of tablets; the preferred types are those with a bigger screen and those with overall better functionality and quality features. Tablets significantly increase the collaboration, communication, and information collection capabilities of the students. They do not, however, replace the need for a computer/laptop because of limited storage and computation capacity, small screen size, and inefficient typing.
Keywords: tablets, teaching, blended learning, tablet quality
41 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus
Authors: Yesim Tumsek, Erkan Celebi
Abstract:
In order to simulate the infinite soil medium for the soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the value of the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain dependence of the shear modulus under cyclic loads makes it difficult to estimate an accurate value for the computation of foundation stiffness in a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use an appropriate value of the soil shear modulus in computational analyses and to evaluate the effect of the variation in shear modulus with strain on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Herein, the impedance functions consist of springs and dashpots representing the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping. Therefore, flexible-base system damping, as well as the variability in shear strengths, should be considered in the calculation of impedance functions to achieve a more realistic dynamic soil-foundation interaction model. For these purposes, a MATLAB code was written in this study. The case-study example chosen for the analysis is a 4-story reinforced concrete building structure located in Istanbul, consisting of shear walls and moment-resisting frames, with a total height of 12 m from the basement level. The foundation system consists of two different-sized strip footings on clayey soil with different plasticity (herein, PI = 13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm. The static stiffnesses, dynamic stiffness modifiers, and embedment correction factors of two rigid rectangular foundations, measuring 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, were obtained for the translational and rocking vibrational modes. Afterwards, the dynamic impedance functions were calculated for the reduced shear modulus through the developed MATLAB code. The embedment effect of the foundation is also considered in these analyses. It is easy to see from the analysis results that the strain induced in the soil depends on the extent of the earthquake demand. It is clearly observed that when the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably by the degradation in soil stiffness, even for a moderate earthquake. Therefore, it is very important to arrive at the corrected dynamic shear modulus for earthquake analysis including soil-structure interaction.
Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus
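A compact illustration of the degradation effect, under two simplifying assumptions: a hyperbolic (Hardin-Drnevich type) modulus reduction curve, and classical rigid circular footing stiffnesses standing in for the paper's rectangular strip footings with embedment corrections. The soil parameters are illustrative.

```python
import numpy as np

def reduced_modulus(g_max, gamma, gamma_ref=1e-3):
    """Hyperbolic modulus reduction: G/Gmax = 1 / (1 + gamma/gamma_ref)."""
    return g_max / (1.0 + gamma / gamma_ref)

def static_stiffness_circular(G, nu, R):
    """Classical static impedances of a rigid circular footing on a
    homogeneous elastic half-space."""
    Kv = 4 * G * R / (1 - nu)            # vertical
    Kh = 8 * G * R / (2 - nu)            # horizontal (translation)
    Kr = 8 * G * R**3 / (3 * (1 - nu))   # rocking
    return Kv, Kh, Kr

for gamma in (1e-5, 1e-4, 1e-3):         # increasing earthquake demand
    G = reduced_modulus(60e6, gamma)      # Gmax = 60 MPa clay, illustrative
    print(gamma, static_stiffness_circular(G, 0.4, 3.0))
```

Running this shows the stiffnesses dropping sharply as the strain level grows, which is exactly the trend the study reports.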
40 Community Perception towards the Major Drivers for Deforestation and Land Degradation of Choke Afro-alpine and Sub-afro alpine Ecosystem, Northwest Ethiopia
Authors: Zelalem Teshager
Abstract:
The Choke Mountains host several endangered and endemic wildlife species and provide important ecosystem services. Despite their environmental importance, the Choke Mountains are in a precarious condition. This raised the need for an evaluation of the community's perception of deforestation and its major drivers, and for suggesting possible solutions, in the Choke Mountains of northwestern Ethiopia. For this purpose, household surveys, key informant interviews, and focus group discussions were used, with a total sample of 102 informants. A purposive sampling technique was applied to select the participants for the in-depth interviews and focus group discussions. Both qualitative and quantitative data analyses were used: descriptive statistics such as means, percentages, and frequencies, presented in tables, figures, and graphs, were computed to organize, analyze, and interpret the data. This study found that smallholder agricultural land expansion, fuelwood collection, population growth, encroachment, free grazing, high demand for construction wood, unplanned resettlement, unemployment, border conflict, lack of a strong forest protection system, and drought were the serious causes of forest depletion reported by local communities. Loss of land productivity, soil erosion, soil fertility decline, increasing wind velocity, rising temperature, and increasing frequency of drought were the most perceived impacts of deforestation. Most of the farmers have a holistic understanding of forest cover change. Strengthening forest protection, improving soil and water conservation, enrichment planting, awareness creation, payment for ecosystem services, and zero-grazing campaigns were mentioned as possible solutions to the current state of deforestation. Intervention measures such as animal fattening, beekeeping, and fruit production can contribute to reducing the causes of deforestation and improve communities' livelihoods. In addition, concerted conservation efforts will ensure that the forest ecosystems contribute to increased ecosystem services. The major drivers of deforestation should be addressed with government intervention to change dependency on forest resources, the income sources of the people, and the institutional set-up of the forestry sector. Overall, a further reduction in anthropogenic pressure is urgent and crucial for the recovery of the afro-alpine vegetation and the interrelated endangered wildlife in the Choke Mountains.
Keywords: choke afro-alpine, deforestation, drivers, intervention measures, perceptions
39 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements of MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding, in real time, the solution to constrained optimization problems, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented with a view towards representative space avionics. This covers an analysis of future space processors as well as the requirements on sensors and actuators for the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications that could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
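The three design elements (prediction model, constraints, cost) come together in a toy finite-horizon problem. This sketch is a generic convex MPC formulation, not a scheme from the tutorial; the double-integrator model, weights, and thrust limit are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

# double-integrator translational dynamics, discretized with step dt
dt, N = 1.0, 20
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([10.0, 1.0])        # weight on position / velocity error
R = np.array([[0.1]])           # weight on control effort
x0 = np.array([5.0, 0.0])       # start 5 m from the target, at rest

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],  # prediction model
               cp.abs(u[:, k]) <= 0.2]                    # physical thrust limit
cp.Problem(cp.Minimize(cost), constr).solve()
print(u.value[0, 0])  # apply the first input, then re-solve at the next step
```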
Procedia PDF Downloads 110
38 Multiphase Equilibrium Characterization Model For Hydrate-Containing Systems Based On Trust-Region Method Non-Iterative Solving Approach
Authors: Zhuoran Li, Guan Qin
Abstract:
A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis and compositional simulation in hydrate reservoirs. A multiphase flash calculation framework, which combines a Gibbs energy minimization function with the cubic-plus-association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior. Thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue approach for the trust-region sub-problem, is utilized to ensure fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial). Hundreds of published equilibrium measurements for hydrate-containing systems are collected to serve as the reference data set for the accuracy test. The comparison shows that our model outperforms two of the models and achieves calculation accuracy comparable to CSMGem. An efficiency test has also been carried out. Because the trust-region method determines the direction and size of the optimization step simultaneously, fast solution progress is obtained. The comparison results show that fewer iterations are needed to optimize the objective function with trust-region methods than with line-search methods. The non-iterative eigenvalue approach also computes faster than the conventional iterative solving algorithm for the trust-region sub-problem, further improving calculation efficiency. A new thermodynamic framework of the multiphase flash model for hydrate-containing systems has been constructed in this work. Sensitivity analysis and numerical experiments have been carried out to demonstrate the accuracy and efficiency of the model. Furthermore, the model is simple to implement on top of the thermodynamic frameworks currently used in the oil and gas industry.
Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method
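For readers unfamiliar with the trust-region sub-problem named above, the sketch below solves it by the classical eigendecomposition-plus-secular-equation route. This is a simplified stand-in, not the specific non-iterative eigenvalue formulation the authors adopt, and the test gradient and Hessian are made-up numbers; hard-case handling is deliberately minimal.

```python
import numpy as np
from scipy.optimize import brentq

def trs_solve(B, g, delta):
    """Minimize 0.5 p^T B p + g^T p subject to ||p|| <= delta.

    One eigendecomposition of the symmetric Hessian reduces the KKT system
    (B + mu I) p = -g, mu >= 0, ||p|| <= delta to a 1-D root-find in mu.
    """
    lam, Q = np.linalg.eigh(B)        # ascending eigenvalues
    gq = Q.T @ g

    def step_norm(mu):
        return np.linalg.norm(gq / (lam + mu))

    # Interior solution if B is positive definite and the Newton step fits.
    if lam[0] > 0 and step_norm(0.0) <= delta:
        return Q @ (-gq / lam)

    # Otherwise the minimizer sits on the boundary: find mu > -lam_min with
    # ||p(mu)|| = delta (the secular equation), then recover p.
    lo = max(0.0, -lam[0]) + 1e-12
    hi = lo + 1.0
    while step_norm(hi) > delta:      # bracket the root by doubling
        hi *= 2.0
    mu = brentq(lambda m: step_norm(m) - delta, lo, hi)
    return Q @ (-gq / (lam + mu))

# Made-up 3-variable test data (think: B = Hessian of the Gibbs energy).
B = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, -1.0]])
g = np.array([1.0, -2.0, 0.5])
p = trs_solve(B, g, delta=0.8)
print(p, np.linalg.norm(p))  # the step stays on/within the trust radius
```

The indefinite test Hessian forces the boundary branch, which is the case a flash calculation hits whenever the Gibbs surface is non-convex between phases.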
Procedia PDF Downloads 172
37 Topology Optimization Design of Transmission Structure in Flapping-Wing Micro Aerial Vehicle via 3D Printing
Authors: Zuyong Chen, Jianghao Wu, Yanlai Zhang
Abstract:
The flapping-wing micro aerial vehicle (FMAV) is a new type of aircraft that mimics the flight of small birds or insects. Compared with traditional fixed-wing or rotor-type aircraft, an FMAV only needs to control the motion of its flapping wings, changing the magnitude and direction of lift to control the flight attitude. Its transmission system should therefore be designed to be very compact. Lightweight design can effectively extend endurance time, but engineering experience alone can hardly satisfy the FMAV's requirements for structural strength and mass simultaneously. Current research still lacks guidance on accounting for the nonlinear behavior of 3D-printing materials when carrying out topology optimization, especially for a tiny FMAV transmission system. The coupling of nonlinear material properties and nonlinear contact behavior in the FMAV transmission system poses a great challenge to the reliability of topology optimization results. In this paper, topology optimization based on the FEA solver package Altair OptiStruct was carried out for the transmission system of an FMAV manufactured by 3D printing. Firstly, the isotropic constitutive behavior of the ultraviolet (UV) curable resin used to fabricate the FMAV structure was evaluated and confirmed through tensile tests (see the sketch after this abstract). Secondly, a numerical model describing the mechanical behavior of the FMAV transmission structure was established and verified by experiments. Then a topology optimization modeling method considering nonlinear factors was presented, and the optimization results were verified by dynamic simulation and experiments. Finally, different load states and constraints were discussed in detail to explore the leading factors affecting the optimization results. The contributions of this article, helpful for guiding the lightweight design of FMAVs, are summarized as follows. First, a dynamic simulation modeling method for obtaining the load states is presented. Second, a verification method for optimized results considering nonlinear factors is introduced. Third, optimizing for a suitably chosen single load state can achieve a better weight-reduction effect and improve computational efficiency compared with taking multiple states into account. Fourth, the adopted formulation improves the ability to resist bending deformation. Fifth, a displacement constraint helps to improve the structural stiffness of the optimized result. The results and engineering guidance in this paper may shed light on structural optimization and lightweight design for future advanced FMAVs.
Keywords: flapping-wing micro aerial vehicle, 3d printing, topology optimization, finite element analysis, experiment
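As an aside on the first step, fitting an isotropic constitutive model to tensile-test data amounts to a linear regression on the elastic portion of the stress-strain curve. The sketch below illustrates this with invented data points, since the abstract does not report the measured values.

```python
import numpy as np

# Invented stress-strain pairs for a UV-curable resin coupon (illustrative
# only; the abstract does not report the measured tensile data).
strain = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])
stress = np.array([0.0, 5.1, 10.3, 15.2, 20.5, 25.4])   # MPa

# Least-squares fit of stress = E * strain over the linear-elastic range
# gives the Young's modulus used in an isotropic constitutive model.
E = np.linalg.lstsq(strain.reshape(-1, 1), stress, rcond=None)[0][0]
print(f"Young's modulus ~ {E:.0f} MPa")   # ~2.5 GPa, a plausible order for such resins
```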
Procedia PDF Downloads 167
36 A Study of Status of Women by Incorporating Literacy and Employment in India and Some Selected States
Authors: Barnali Thakuria, Labananda Choudhury
Abstract:
Gender equality and women's empowerment is one of the eight Millennium Development Goals (MDGs). Literacy and employment are parameters that reflect the empowerment of women, but in a developing country like India, literacy and working status among females are not satisfactory. Literacy and employment can be measured by Literate Life Expectancy (LLE) and Working Life Expectancy (WLE), and the two factors can also be combined into a new, more informative measure. The proposed indicator may be called literate-working life expectancy (LWLE). LLE gives the average number of years a person lives in a literate state under current mortality and literacy conditions, while WLE is defined as the average number of years a person lives in a working state if current mortality and working conditions prevail. Similarly, LWLE gives the expected number of years a person lives in both a literate and a working state. The situation of females cannot be assessed without comparing both sexes. In the present paper, an attempt has been made to estimate LLE and WLE for both sexes, based on the 2011 census, in India and in selected states from various zones of the country, namely Assam from the North-East, Gujarat from the West, Kerala from the South, Rajasthan from the North, Uttar Pradesh from the Centre, and West Bengal from the East. Furthermore, we have developed a formula for the new indicator LWLE and applied the proposed index to India and the selected states for both males and females. Data have been extracted from SRS (Sample Registration System) based abridged life tables and the Census of India. The computation of LLE follows the method developed by Lutz, while WLE follows the method developed by Saw Swee Hock; combining literacy and employment, the new indicator LWLE follows the same approach. Contrasting results have been found in different parts of India. Male LLE at birth is highest (lowest) in Kerala (Uttar Pradesh), at 61.66 (39.51) years. A similar pattern is observed among females, at 62.58 and 25.11 years, respectively. But male WLE at birth is highest (lowest) in Rajasthan (Kerala), at 37.11 (32.64) years. The highest female WLE at birth is also observed in Rajasthan, at 23.51 years, while the lowest, 11.76 years, occurs in Uttar Pradesh. Kerala's performance is exceptionally good in terms of male LWLE at birth, while the lowest male LWLE at birth prevails in Uttar Pradesh. Female LWLE at birth is highest (lowest) in Kerala (Uttar Pradesh), at 19.73 (4.77) years. The value of the index increases as the number of factors involved in the life expectancy decreases. Overall, women are found to be lagging behind in terms of both literacy and employment. The findings of the study will help planners take the necessary steps to improve the position of women.
Keywords: life expectancy, literacy, literate life expectancy, working life expectancy
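The Lutz-style computation of LLE is a Sullivan-type calculation: person-years lived in each age group are weighted by the age-specific literacy rate before summing. Below is a minimal sketch assuming a toy abridged life table (the SRS and Census values used in the paper are not reproduced in the abstract); LWLE would follow the same pattern with the joint literate-and-working proportion as the weight.

```python
import numpy as np

# Toy truncated abridged life table: person-years lived in each age interval
# (nLx) per l0 = 100000 births, plus age-specific literacy proportions.
# Values are illustrative, not the SRS/Census figures used in the paper.
nLx = np.array([480000, 470000, 460000, 440000, 400000, 330000, 200000], float)
lit = np.array([0.00,   0.60,   0.75,   0.70,   0.60,   0.45,   0.30])
l0 = 100000.0

# Sullivan-type index: LLE at birth = literate person-years summed over all
# ages, divided by the birth cohort size.
LLE0 = (nLx * lit).sum() / l0
LE0 = nLx.sum() / l0
print(f"e(0) = {LE0:.1f} years, LLE(0) = {LLE0:.1f} years")
# LWLE reuses the same line with lit replaced by the joint
# literate-and-working proportion in each age group.
```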
Procedia PDF Downloads 420
35 A Method Intensive Top-down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India
Authors: Rituparna Pal, Faiz Ahmed
Abstract:
Neighbourhood energy efficiency is a newly emerged term that addresses the quality of the urban stratum of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers in developing urban-scale cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency has been realized only recently, just as cities, towns, and the other areas comprising the massive global urban stratum have begun facing strong blows from climate change, the energy crisis, cost hikes, and an alarming shortfall in meeting the needs of urban areas. This step toward urban sustainability can therefore be described more as a 'retrofit action', patching up an already affected urban structure. Even if energy-efficiency measures are introduced in existing cities and urban areas, the underlying layer remains, and a complete model of urban sustainability for it still lacks definition. Urban sustainability is a broadly used term with numerous parameters and policies through which it can be pursued; neighbourhood energy efficiency can be an integral part of it, in which neighbourhood-scale indicators, block-level indicators, and building-physics parameters are understood, analyzed, and synthesized into guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy performance but also in important parameters such as quality of life, access to green space, access to daylight, outdoor comfort, and natural ventilation. Apart from designing less energy-hungry buildings, it is necessary to create a built environment that places less stress on buildings to consume more energy. Much of the existing literature comes from Western countries, prominently Spain and Paris, and also Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban stratum. The study site has been selected in the upcoming capital city of Amaravati, and the approach can be replicated in similar neighbourhood typologies in the area. The paper proposes a method-intensive approach to quantifying energy and sustainability indices in detail, involving several macro-, meso-, and micro-level covariates and parameters. Several iterations have been made at both the macro and micro levels and subjected to simulation, computation, and mathematical models, and finally to comparative analysis. Parameters at all levels are analyzed to identify the best-case scenarios, which in turn are extrapolated to the macro level, finally yielding a proposed model for an energy-efficient neighbourhood and worked-out guidelines, with their significance and correlations derived.
Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters
Procedia PDF Downloads 175
34 A Comprehensive Key Performance Indicators Dashboard for Emergency Medical Services
Authors: Giada Feletti, Daniela Tedesco, Paolo Trucco
Abstract:
The present study aims to develop a dashboard of Key Performance Indicators (KPIs) to enhance information and predictive capabilities in Emergency Medical Services (EMS) systems, supporting both the operational and strategic decisions of different actors. The research methodology begins with a review of the technical-scientific literature on the indicators currently used to measure the performance of EMS systems. This literature analysis showed that current studies focus on two distinct perspectives: the ambulance service, a fundamental component of pre-hospital health treatment, and patient care in the Emergency Department (ED). The perspective proposed by this study is an integrated view of the ambulance service process and the ED process, both essential to ensuring high quality of care and patient safety. The proposal thus covers the entire healthcare service process and, as such, captures the interconnection between the two EMS processes, pre-hospital and hospital, linked by the assignment of the patient to a specific ED. In this way, the entire patient pathway can be optimized, and attention is paid to dependencies between decisions that current EMS management models tend to neglect or underestimate. In particular, integrating the two processes makes it possible to evaluate ED selection decisions with visibility of each ED's saturation status, considering distance, available resources, and expected waiting times. Starting from a critical review of the KPIs proposed in the extant literature, the dashboard was designed: the large number of analyzed KPIs was reduced by first eliminating those not in line with the aim of the study and then those supporting a similar functionality. The KPIs finally selected were tested on a realistic dataset, which led us to exclude additional indicators because the data required for their computation were unavailable. The final dashboard, which was discussed and validated by experts in the field, includes a variety of KPIs able to support operational and planning decisions, early warning, and citizens' real-time awareness of ED accessibility. By associating each KPI with the EMS phase it refers to, it was also possible to design a well-balanced dashboard covering both the efficiency and the effectiveness of the entire EMS process. At present, only the initial phases related to the interconnection between the ambulance service and patient care are covered by traditional KPIs, compared with the subsequent phases taking place in the hospital ED; this could be addressed in future developments of the dashboard. Moreover, the research could proceed by building a multi-layer dashboard, with a first level containing a minimal set of KPIs measuring the basic performance of the EMS system at an aggregate level and further levels containing KPIs that provide additional, more detailed information.
Keywords: dashboard, decision support, emergency medical services, key performance indicators
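To illustrate the kind of indicator such a dashboard aggregates, the sketch below computes two commonly used EMS KPIs (90th-percentile ambulance response time and median ED waiting time) from a small synthetic event log. The column names and timestamps are assumptions, since the paper's dataset and final KPI list are not reproduced here.

```python
import pandas as pd

# Synthetic EMS event log (timestamps invented for illustration).
log = pd.DataFrame({
    "call":       pd.to_datetime(["2023-05-01 10:00", "2023-05-01 10:20", "2023-05-01 11:05"]),
    "on_scene":   pd.to_datetime(["2023-05-01 10:08", "2023-05-01 10:31", "2023-05-01 11:21"]),
    "ed_arrival": pd.to_datetime(["2023-05-01 10:40", "2023-05-01 11:02", "2023-05-01 11:50"]),
    "ed_taken":   pd.to_datetime(["2023-05-01 10:55", "2023-05-01 11:45", "2023-05-01 12:05"]),
})

# Pre-hospital KPI: 90th percentile of call-to-scene response time (minutes).
response_min = (log["on_scene"] - log["call"]).dt.total_seconds() / 60
print("response time p90 [min]:", round(response_min.quantile(0.9), 1))

# Hospital-side KPI: median ED waiting time from arrival to take-in-charge.
ed_wait_min = (log["ed_taken"] - log["ed_arrival"]).dt.total_seconds() / 60
print("ED waiting median [min]:", ed_wait_min.median())
```

Tagging each such KPI with the EMS phase it refers to is what lets the dashboard balance pre-hospital and hospital-side coverage, as described above.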
Procedia PDF Downloads 111
33 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis
Authors: Liliia N. Butymova, Vladimir Ya Modorskii
Abstract:
To ensure efficient operation of a gas-transmittal gas compressor unit (GCU), leakages through the labyrinth packings (LP) should be minimized. Leakages can be reduced by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations, and is designed to ensure the absence of mechanical contact. Vibration mitigation allows the LP gap to be minimized, so it is advantageous to study how processes in the dynamic gas-structure system influence LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics, and the influence of the latter on the rotor structure, within a unidirectionally coupled dynamic FSI problem. The dependence of the nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations was studied for various gas speeds and pressures, shaft rotation speeds and vibration amplitudes, and working media. The multi-processor ANSYS CFX package was chosen as the numerical computation tool, and the problem was solved on the PNRPU high-capacity computer complex. The vibrating deformed shaft is replaced by a rigid profile that moves up and down in a fixed annulus according to a prescribed harmonic law; a nonstationary gas-dynamic problem is then solved to determine the time dependence of the total gas-dynamic force acting on the shaft. A pressure increase from 0.1 to 10 MPa causes growth of the gas-dynamic force oscillation amplitude and frequency; the phase shift angle between the gas-dynamic force oscillations and those of the shaft displacement decreases from 3π/4 to π/2, and the damping constant reaches its maximum at a gap pressure of 1 MPa. An increase of the shaft oscillation frequency from 50 to 150 Hz at P = 10 MPa causes growth of the gas-dynamic force oscillation amplitude; the damping constant is largest at 50 Hz, equaling 1.012. An increase of the shaft vibration amplitude from 20 to 80 µm at P = 10 MPa raises the gas-dynamic force amplitude up to 20 times, and the damping constant increases from 0.092 to 0.251. Calculations for various working media (methane, perfect gas, air at 25 °C) show that at P = 0.1 MPa the minimum persistent gas-dynamic force oscillation amplitude is observed in methane and the maximum in air; the frequency remains almost unchanged, and in air the phase shift changes from 3π/4 to π/2. At P = 10 MPa, by contrast, the maximum gas-dynamic force oscillation amplitude is observed in methane and the minimum in air, and air demonstrates surging. An increase of the leakage speed through the LP from 0 to 20 m/s at P = 0.1 MPa causes the gas-dynamic force oscillation amplitude to decrease by three orders of magnitude, while the oscillation frequency and the phase shift increase twofold and stabilize. At P = 1 MPa, the same increase in leakage speed reduces the gas-dynamic force oscillation amplitude by almost four orders of magnitude; the phase shift angle increases from π/72 to π/2, and the oscillations become persistent. Flow rate thus proved to strongly influence the pressure oscillation amplitude and phase shift angle, while the influence of the working medium depends on the operating conditions: as pressure grows, vibrations are affected most in methane (of the working media considered), and as pressure decreases, in air at 25 °C.
Keywords: aeroelasticity, labyrinth packings, oscillation phase shift, vibration
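The amplitude and phase-shift quantities reported above can be extracted from two sampled harmonic signals (shaft displacement and gas-dynamic force) via their complex amplitudes at the driving frequency. The sketch below does this on synthetic signals; it is an illustrative post-processing step, not the authors' pipeline, and all signal parameters are invented.

```python
import numpy as np

fs, f0 = 10_000.0, 50.0              # sampling rate [Hz], shaft frequency [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)        # one second = 50 full periods of f0

# Synthetic stand-ins for CFD output: shaft displacement, and a gas-dynamic
# force lagging it by 3*pi/4, plus a little noise.
x = 40e-6 * np.sin(2 * np.pi * f0 * t)
F = 1.5 * np.sin(2 * np.pi * f0 * t - 3 * np.pi / 4) + 0.02 * np.random.randn(t.size)

# Complex amplitude of each signal at f0 (single-bin Fourier projection
# over an integer number of periods).
k = np.exp(-2j * np.pi * f0 * t)
X = (x * k).mean() * 2
Fc = (F * k).mean() * 2

print("force amplitude:", abs(Fc))              # ~1.5
print("phase shift [rad]:", np.angle(Fc / X))   # ~ -3*pi/4: force lags displacement
```

Projecting over whole periods makes the single-bin estimate exact for a pure harmonic, which is why the phase values in the study can be quoted as clean fractions of π.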
Procedia PDF Downloads 295
32 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions
Authors: Gaurangi Saxena, Ravindra Saxena
Abstract:
Lately, cloud computing has been used to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm has emerged as a powerful tool for the optimal utilization of resources and for gaining competitiveness through cost reduction, achieving business goals with greater flexibility. Realizing the importance of this technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google, and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, with the right middleware, a cloud computing system can execute all the programs a normal computer could run; potentially, everything from the simplest generic word-processing software to highly specialized programs customized for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments but also support 'interactive user-facing applications' such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm; it draws on existing technologies and approaches, such as utility computing, Software-as-a-Service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site; cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers such as Amazon, Google, Sun, IBM, Oracle, and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases, and applications. Application services may include email, office applications, finance, video, audio, and data processing. By using a cloud computing system, a company can improve its customer relationship management: a CRM cloud computing system can deliver a sales team a blend of unique functionalities that improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss applications of cloud computing in different disciplines of management, especially marketing, with special reference to its use in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors, and much more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at a lower cost, gain competitive advantage.
Keywords: cloud computing, competitive advantage, customer relationship management, grid computing
Procedia PDF Downloads 311
31 Design Flood Estimation in Satluj Basin-Challenges for Sunni Dam Hydro Electric Project, Himachal Pradesh-India
Authors: Navneet Kalia, Lalit Mohan Verma, Vinay Guleria
Abstract:
Introduction: Design flood studies are essential for the effective planning and functioning of water resource projects. Design flood estimation for the Sunni Dam Hydro Electric Project, located on the river Satluj in the State of Himachal Pradesh, India, was a major challenge because the river flows through the Himalayan region from Tibet to India, with a large catchment of varying topography, climate, and vegetation. No discharge data were available for the part of the river in Tibet, whereas for India they were available only at Khab, Rampur, and Luhri. Estimating the design flood using standard methods was therefore not possible, and the challenge was met using two different approaches for the upper (snow-fed) and lower (rain-fed) catchments: a flood frequency approach and a hydro-meteorological approach. i) For the catchment up to the Khab gauging site (sub-catchment C1), the flood frequency approach was used. Around 90% of the 46300 sq km catchment area up to Khab is snow-fed, lying above 4200 m. Given the predominantly snow-fed area, the 1-in-10000-year return period flood estimated by flood frequency analysis at Khab was adopted as the Probable Maximum Flood (PMF). The flood peaks were taken from daily observed discharges at Khab and increased by 10% to convert them to instantaneous values; the design flood of 4184 cumec thus obtained was considered the PMF at Khab. ii) For the catchment between Khab and the Sunni Dam (sub-catchment C2), the hydro-meteorological approach was used. This method is based on the catchment response to the observed rainfall pattern, namely the Probable Maximum Precipitation (PMP). The design flood computation mainly involves estimating a design storm hyetograph and deriving the catchment response function; a unit hydrograph is assumed to represent the response of the entire catchment area to a unit of rainfall. The main advantage of the hydro-meteorological approach is that it gives a complete flood hydrograph, which allows a realistic determination of its moderation effect while passing through a reservoir or a river reach. These studies derived the PMF for the catchment between Khab and the Sunni Dam site using 1-day and 2-day PMP values of 232 and 416 cm, respectively; the PMF so obtained was 12920.60 cumec. Final Result: Since the catchment area up to the Sunni Dam is divided into two sub-catchments, the flood hydrograph for catchment C1 was routed through the connecting channel reach (river Satluj) using the Muskingum method, and the design flood was computed by adding the routed flood ordinates to the flood ordinates of catchment C2. The total design flood (i.e., the 2-day PMF), with a peak of 15473 cumec, was obtained. Conclusion: Even though several factors are relevant when deciding the method to be used for design flood estimation, data availability and the purpose of the study are the most important. Since, generally, one cannot wait for hydrological data of adequate quality and quantity to become available, flood estimation has to be done with whatever data exist, and the method is selected according to the type of data available for the particular catchment.
Keywords: design flood, design storm, flood frequency, PMF, PMP, unit hydrograph
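The Muskingum routing step mentioned in the final result combines the inflow hydrograph with the storage parameters K and X of the reach. Below is a minimal sketch; the K, X, time step, and inflow ordinates are illustrative assumptions, not the calibrated Satluj values, which the abstract does not report.

```python
import numpy as np

def muskingum_route(inflow, K=12.0, X=0.2, dt=6.0):
    """Route an inflow hydrograph through a channel reach.

    Standard Muskingum recursion O[t+1] = C0*I[t+1] + C1*I[t] + C2*O[t];
    K (storage constant) and dt share the same time unit [h].
    """
    D = K * (1 - X) + 0.5 * dt
    C0 = (0.5 * dt - K * X) / D
    C1 = (0.5 * dt + K * X) / D
    C2 = (K * (1 - X) - 0.5 * dt) / D   # C0 + C1 + C2 = 1
    out = np.empty_like(inflow)
    out[0] = inflow[0]                   # assume an initial steady state
    for t in range(len(inflow) - 1):
        out[t + 1] = C0 * inflow[t + 1] + C1 * inflow[t] + C2 * out[t]
    return out

# Invented C1-type flood hydrograph ordinates [cumec] at 6 h intervals,
# peaking at the 4184 cumec PMF quoted in the abstract.
I = np.array([500, 1200, 2800, 4184, 3600, 2500, 1500, 900, 600], float)
O = muskingum_route(I)
print("routed peak:", round(O.max()), "cumec (attenuated and lagged)")
```

The routed ordinates are then added to the C2 hydrograph ordinates, which is how the combined 15473 cumec design-flood peak is assembled in the study.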
Procedia PDF Downloads 325
30 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building-code literature provides recommendations for normalizing the calculation of the dynamic properties of structures. Most building codes distinguish among types of structural systems, construction materials, and configurations through a numerical coefficient in the expression for the fundamental period; the period is then used with normalized response spectra to compute the base shear. The typical parameter in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared with buildings of more homogeneous materials such as steel, or with concrete placed under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated by eigenvalue matrix computation. The results were also used in a separate regression analysis with the computed period as the dependent variable and five building properties as independent variables. The statistical analysis sheds light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings under special conditions arising from concrete damage, aging, or materials quality control during construction. Overall, the present analysis shows that simplified code formulas for the fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. This conclusion is confirmed by the analytical model, in which fundamental periods were computed using numerical techniques and eigenvalue solutions. The recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the need for further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging; however, this study was excluded from the present paper and left for future research, as it has its own peculiarities and requires a different type of analysis.
Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
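The eigenvalue computation of the fundamental period referred to above reduces, for a lumped-mass shear-building idealization, to the generalized eigenproblem K φ = ω² M φ. Below is a minimal sketch with invented story masses and stiffnesses; the paper's 151-building dataset is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

# Invented 5-story shear building: lumped story masses [kg] and story
# stiffnesses [N/m] (illustrative, not the Middle East dataset).
m = np.full(5, 2.0e5)
k = np.full(5, 2.5e8)

# Tridiagonal stiffness matrix of a shear frame; diagonal mass matrix.
K = np.zeros((5, 5))
for i in range(5):
    K[i, i] = k[i] + (k[i + 1] if i < 4 else 0.0)
    if i < 4:
        K[i, i + 1] = K[i + 1, i] = -k[i + 1]
M = np.diag(m)

# Generalized eigenproblem K*phi = w^2 * M*phi; the smallest eigenvalue
# gives the fundamental circular frequency, hence the fundamental period.
w2 = eigh(K, M, eigvals_only=True)
T1 = 2 * np.pi / np.sqrt(w2[0])
print(f"fundamental period T1 = {T1:.2f} s")
```

Regressing periods obtained this way against story height, overall height, floor plan, number of bays, and concrete properties is exactly the exercise that exposes what the height-only code formulas leave out.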
Procedia PDF Downloads 230
29 The Role of Building Information Modeling as a Design Teaching Method in Architecture, Engineering and Construction Schools in Brazil
Authors: Aline V. Arroteia, Gustavo G. Do Amaral, Simone Z. Kikuti, Norberto C. S. Moura, Silvio B. Melhado
Abstract:
Despite the significant advances made by the construction industry in recent years, the persistent absence of integration between the design and construction phases is still an evident and costly problem in building construction. Globally, the construction industry has sought to adopt collaborative practices through new technologies to mitigate the impacts of this fragmented process and to optimize production. In this new technological business environment, professionals are required to develop new methodologies based on collaboration and the integration of information throughout the building life cycle. This scenario also reflects the industry's reality in developing nations, where the increasing need for overall efficiency has demanded new educational alternatives at the undergraduate and postgraduate levels. In countries like Brazil, it is commonly understood that Architecture, Engineering and Building Construction educational programs must review their traditional design pedagogy to promote a comprehensive notion of integration and simultaneity between the phases of a project. In this context, the coherent inclusion of computational design across the educational programs of construction-related professionals is a significant research topic that can, in fact, affect industry practice. Thus, the main objective of the present study was to comparatively measure the effectiveness of the Building Information Modeling courses offered at the Schools of Architecture and Civil Engineering of the University of Sao Paulo, the most important academic institution in Brazil, and the courses offered at well-recognized BIM research institutions, such as the School of Design in the College of Architecture of the Georgia Institute of Technology, USA, in order to evaluate the dissemination of BIM knowledge among students at the postgraduate level. The qualitative research methodology was based on the analysis of the programs and activities proposed by two BIM courses offered in each of the above-mentioned institutions, which were used as case studies. The data collection instruments were a student questionnaire, semi-structured interviews, participatory evaluation, and pedagogical practices. The results detected broad heterogeneity among the students regarding their professional experience, hours dedicated to training, and especially their general knowledge of BIM technology and its applications. The research observed that BIM is mostly understood as an operational tool and not as a methodological approach to project development, relevant to the whole building life cycle. The present research concludes with an assessment of the importance of incorporating BIM, efficiently and in its totality, as a teaching method in undergraduate and graduate courses in Brazilian architecture, engineering and building construction schools.
Keywords: building information modeling (BIM), BIM education, BIM process, design teaching
Procedia PDF Downloads 153