Search results for: transmission error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3767

2807 Analysis of Noise Environment and Acoustics Material in Residential Building

Authors: Heruanda Alviana Giska Barabah, Hilda Rasnia Hapsari

Abstract:

Acoustic phenomena create conditions of acoustic interpretation that describe the characteristics of an environment. In urban areas, heterogeneous and simultaneous human activity forms a soundscape that differs from other regions; one of the features of urban areas that shapes this soundscape is the presence of vertical housing, i.e., residential buildings. Activities both within the building and in the surrounding environment produce a soundscape with particular characteristics. Acoustic comfort in residential buildings has therefore become an important consideration, and the demand for it has made building features more diverse. The initial step of mapping the acoustic conditions of a soundscape is important, as it is the method by which uncomfortable conditions are identified. Noise generated by road traffic, railways, and aircraft is an important consideration, especially for urban people; proper building design therefore becomes very important as an effort to provide appropriate acoustic comfort. In this paper, the authors develop a noise map of the residential building site. Mapping was done by taking measurements at several points referenced to the noise sources. The mapping results became the basis for modeling how acoustic waves interact with the building model. Material selection was done based on a literature study and modeling simulation using Insul, considering the absorption coefficient and Sound Transmission Class. The acoustic rays were analyzed with the ray tracing method using the COMSOL simulation software, which shows the movement of acoustic rays and their interaction with a boundary. The results of this study can be used to select boundary materials in residential buildings and to guide improvements in acoustic quality in the acoustic zones that are formed.
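
As an illustration of the specular ray-tracing idea used in the study, the following Python sketch follows a single ray around a rectangular room and attenuates its energy by (1 − α) at each wall bounce. The room dimensions, absorption coefficients, and launch direction are hypothetical placeholders, not values from the paper (which used COMSOL and Insul):

```python
import numpy as np

# Illustrative 2D specular ray tracing in a rectangular room; all dimensions
# and absorption coefficients are hypothetical, not taken from the paper.
LX, LY = 8.0, 5.0               # room size in metres
alpha = {"x": 0.30, "y": 0.10}  # absorption coefficient per wall pair

def trace_ray(pos, direction, n_reflections=20):
    """Follow one ray, attenuating its energy by (1 - alpha) per bounce."""
    pos = np.array(pos, float)
    d = np.array(direction, float) / np.linalg.norm(direction)
    energy = 1.0
    for _ in range(n_reflections):
        # Distance along the ray to the x-walls and y-walls
        tx = ((LX if d[0] > 0 else 0.0) - pos[0]) / d[0] if d[0] else np.inf
        ty = ((LY if d[1] > 0 else 0.0) - pos[1]) / d[1] if d[1] else np.inf
        t = min(tx, ty)
        pos = pos + t * d
        if tx < ty:                    # hit a vertical wall: flip x component
            d[0] = -d[0]
            energy *= 1.0 - alpha["x"]
        else:                          # hit a horizontal wall: flip y component
            d[1] = -d[1]
            energy *= 1.0 - alpha["y"]
    return energy

print(f"residual ray energy after 20 bounces: {trace_ray((2, 2), (1, 0.7)):.4f}")
```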

Keywords: residential building, noise, absorption coefficient, sound transmission class, ray tracing

Procedia PDF Downloads 247
2806 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-Earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near optimal terminal guidance solution. The study finds that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, so the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is presented in this paper.
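
The parametric study described above can be pictured as a grid sweep over the two guidance gains, retaining the gain pairs that satisfy the miss-distance and impact-angle constraints. The sketch below assumes a stand-in `simulate` function with made-up response surfaces and made-up thresholds; it only illustrates how the permissible region is mapped, not the paper's 3DOF simulation:

```python
import numpy as np

# Hypothetical stand-in for a 3DOF reentry simulation: returns (miss distance,
# impact angle error) for a pair of vector-guidance gains (k1, k2).
def simulate(k1, k2):
    miss = abs(k1 - 0.6 * k2 - 1.2) * 50.0   # placeholder response surface
    angle_err = abs(k2 - 2.0) * 3.0          # placeholder response surface
    return miss, angle_err

K1 = np.linspace(0.5, 5.0, 46)   # candidate gain ranges (illustrative)
K2 = np.linspace(0.5, 5.0, 46)

permissible = []
for k1 in K1:
    for k2 in K2:
        miss, angle_err = simulate(k1, k2)
        if miss < 10.0 and angle_err < 1.0:  # constraint thresholds (illustrative)
            permissible.append((k1, k2))

print(f"{len(permissible)} of {K1.size * K2.size} gain pairs are permissible")
```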

Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 427
2805 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield from meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on crop mechanistic modeling; they describe crop growth in interaction with the environment as a dynamical system. However, calibrating such a dynamical system is difficult because it amounts to a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process but imposes strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbors, Artificial Neural Networks, and SVM regression). The dataset consists of 720 records of corn yield at the county scale, provided by the United States Department of Agriculture (USDA), together with the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine the two types of approaches.
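
A minimal sketch of the data-driven comparison described above, using scikit-learn with 5-fold cross-validation and the RMSEP/MAEP metrics. A synthetic regression dataset stands in for the 720 USDA county records, and the model hyperparameters are illustrative:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import cross_val_predict, KFold

# Synthetic stand-in for the USDA county-level dataset (720 records of
# climatic features and corn yield); the real data are not reproduced here.
X, y = make_regression(n_samples=720, n_features=12, noise=10.0, random_state=0)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "Ridge": Ridge(alpha=1.0),
    "Lasso": Lasso(alpha=0.1),
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=cv)   # out-of-fold predictions
    rmsep = np.sqrt(np.mean((y - pred) ** 2))
    maep = np.mean(np.abs(y - pred))
    print(f"{name:>12}: RMSEP={rmsep:.2f}  MAEP={maep:.2f}")
```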

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 231
2804 Malaria Management among Dispensers in Drug Retail Outlets in Buea Community: An Assessment of Knowledge of Malaria and Antimalarial Drug Prescription and Dispensing Practices

Authors: Marcelus U. Ajonina, Deodata B. Ngonga, Kenric B. Ware, Carine K. Nfor

Abstract:

Background: Lack of knowledge of the rational use of antimalarial drugs among dispensers is a serious problem, especially in areas of intense transmission, as it increases the risk of resistance and adverse drug reactions. This study aimed to assess knowledge of malaria as well as the perception and dispensing practices of antimalarials among vendors in the Buea community. Methods: A community-based cross-sectional survey of a random sample of 140 drug vendors living within the Buea community was conducted between March and June 2017. A questionnaire was designed to obtain information from drug vendors on general knowledge of malaria as well as dispensing practices. Data were analyzed using SPSS Statistics 20.0, with results considered significant at p ≤ 0.05. Results: Knowledge of malaria symptoms, transmission, and prevention was reasonable among 55.8% (77) of the respondents. Only 33.6% (47) of the respondents could attribute the cause of malaria to protozoa of the genus Plasmodium. Of the 140 vendors, 115 (82.7%) prescribed antimalarial drugs. Knowledge of the national protocol for malaria case management among dispensers was 35.0%. Vendors in hospital/community pharmacies were 2.4 times (OR = 3.14, 95% CI: 4.14 - 8.74, p < 0.001) more knowledgeable about the malaria treatment protocol than those in drugstores. The prevalence of self-prescription of antimalarials was 39.3%. Self-prescription was significantly higher in drugstores than in hospital/community pharmacies (p = 0.004). In all, 56 (40.6%) of the vendors showed good practices regarding antimalarial drug dispensing, with the majority (51.7%) from community pharmacies (OR = 2.27, 95% CI: 1.13-4.56). Conclusion: The findings reveal moderate knowledge of malaria but poor prescription and dispensing practices of antimalarial drugs among vendors, indicating a need for routine monitoring and evaluation to prevent the emergence of strains resistant to currently efficacious antimalarials.
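
For reference, odds ratios such as those reported above are computed from 2x2 cross-tabulations; a minimal sketch with a Wald 95% confidence interval follows. The cell counts are illustrative placeholders, not the study's raw data:

```python
import math

# Odds ratio with a 95% Wald confidence interval from a 2x2 table; the counts
# below are illustrative, not the study's actual cross-tabulation.
a, b = 30, 17   # pharmacies: knowledgeable / not knowledgeable
c, d = 19, 74   # drugstores: knowledgeable / not knowledgeable

or_ = (a * d) / (b * c)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
lo, hi = or_ * math.exp(-1.96 * se), or_ * math.exp(1.96 * se)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```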

Keywords: antimalarials, drug retail outlets, dispensing, drug resistance, prescription

Procedia PDF Downloads 136
2803 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements

Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck

Abstract:

This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from a previous measurement campaign at the North Campus, using the standard deviation method (SDM). The estimated results of the numerical model showed maximum average percent errors of approximately 53% and 37% for wind incidents from the North and Northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
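
The three dispersion-model evaluation metrics named above have standard definitions (in the style of Chang and Hanna); a short sketch computing them from paired observed/predicted concentrations follows, with illustrative values rather than MUST data:

```python
import numpy as np

def validation_metrics(co, cp):
    """Standard dispersion-model evaluation metrics: fractional bias (FB),
    geometric mean bias (MG), normalized mean square error (NMSE).
    co = observed, cp = predicted concentrations (must be > 0 for MG)."""
    co, cp = np.asarray(co, float), np.asarray(cp, float)
    fb = 2.0 * (co.mean() - cp.mean()) / (co.mean() + cp.mean())
    mg = np.exp(np.log(co).mean() - np.log(cp).mean())
    nmse = np.mean((co - cp) ** 2) / (co.mean() * cp.mean())
    return fb, mg, nmse

# Illustrative observed/predicted pairs, not MUST data
obs = [1.2, 0.8, 2.5, 1.9, 0.6]
pred = [1.0, 0.9, 2.2, 2.1, 0.5]
print("FB=%.3f  MG=%.3f  NMSE=%.3f" % validation_metrics(obs, pred))
```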

Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow

Procedia PDF Downloads 135
2802 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), the coefficient of determination (R²), the Nash-Sutcliffe Efficiency (NSE), and the Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days, and their values remain stable for forecast horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance, the forecast horizon of t+3 days is chosen for predicting future daily water levels in the Nokoué Lake basin.
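
A minimal sketch of a sliding-window LSTM for one-step-ahead water-level forecasting, in the spirit of the model described above. The synthetic series, window length, network size, and training schedule are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from tensorflow import keras

# Synthetic stand-in for the daily Nokoue Lake water-level record.
series = np.sin(np.linspace(0, 40, 1200)) + 0.1 * np.random.randn(1200)

window = 30  # days of history used to predict day t+1 (illustrative)
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., None]  # shape (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:1000], y[:1000], epochs=5, verbose=0)

pred = model.predict(X[1000:], verbose=0).ravel()
rmse = np.sqrt(np.mean((y[1000:] - pred) ** 2))
print(f"test RMSE: {rmse:.3f}")
```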

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 64
2801 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that acquires information about the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. In practice, however, the constant-velocity assumption is often violated, which can produce a large deviation between the estimated initial pose and the true value and thus introduce errors into the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. To better describe the acceleration of the camera pose, we decouple the pose transformation matrix and compute the rotation matrix and the translation vector separately, with the rotation matrix represented by a rotation vector. We assume that, over a short period of time, the changes in the angular velocity and in the translation vector remain constant. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied the proposed approach to the ORB-SLAM3 system and evaluated two sequences from the TUM dataset. The results show that the proposed method yields a more accurate initial pose estimate and that the accuracy of the ORB-SLAM3 system improves by 6.61% and 6.46% on the two test sequences, respectively.
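
The decoupled, acceleration-based prediction described above can be sketched as follows: per-step relative rotations are expressed as rotation vectors, and both the rotation and translation increments are extrapolated assuming their change stays constant over a short interval. The exact update used in the paper may differ; this is an illustrative reading of the abstract:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_next(rots, trans):
    """rots: three scipy Rotations and trans: three translation vectors for
    frames k-2, k-1, k. Returns an extrapolated pose for frame k+1 assuming
    constant acceleration of the decoupled rotation and translation."""
    # Per-step relative rotations as rotation vectors ("angular velocities")
    w1 = (rots[0].inv() * rots[1]).as_rotvec()
    w2 = (rots[1].inv() * rots[2]).as_rotvec()
    w_next = w2 + (w2 - w1)                # constant angular acceleration
    r_next = rots[2] * R.from_rotvec(w_next)

    v1, v2 = trans[1] - trans[0], trans[2] - trans[1]
    t_next = trans[2] + v2 + (v2 - v1)     # constant linear acceleration
    return r_next, t_next

# Toy trajectory: accelerating yaw and forward motion
rots = [R.from_rotvec([0, 0, a]) for a in (0.00, 0.02, 0.05)]
trans = [np.array([0.00, 0, 0]), np.array([0.10, 0, 0]), np.array([0.25, 0, 0])]
r_pred, t_pred = predict_next(rots, trans)
print("predicted rotvec:", r_pred.as_rotvec(), " translation:", t_pred)
```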

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 94
2800 Ground Short Circuit Contributions of a MV Distribution Line Equipped with PWMSC

Authors: Mohamed Zellagui, Heba Ahmed Hassan

Abstract:

This paper proposes a new approach for the calculation of short-circuit parameters in the presence of a Pulse Width Modulated Series Compensator (PWMSC). The PWMSC is a new Flexible Alternating Current Transmission System (FACTS) device that can modulate the impedance of a transmission line by varying the duty cycle (D) of a train of pulses with fixed frequency. This improves system performance, as the device provides virtual compensation of the distribution line impedance by injecting a controllable apparent reactance in series with the line. This controllable reactance can operate in both capacitive and inductive modes, which makes the PWMSC highly effective in controlling power flow and increasing system stability. The purpose of this work is to study the impact of the fault resistance (RF), varied between 0 and 30 Ω, on the fault current calculations for a ground fault at a fixed fault location. The case study is a medium voltage (MV) Algerian distribution line compensated by a PWMSC in the 30 kV Algerian distribution network. The analysis is based on the symmetrical components method, which involves calculating the symmetrical components of currents and voltages, without and with the PWMSC, for both the maximum and minimum duty cycle values in the capacitive and inductive modes. The paper presents simulation results, which are verified by the theoretical analysis.
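
For context, the classical symmetrical-components result for a single line-to-ground fault through a fault resistance is If = 3E / (Z1 + Z2 + Z0 + 3RF); the sketch below sweeps RF over the 0-30 Ω range studied in the paper. The sequence impedances are illustrative values for a 30 kV feeder, not the Algerian network data:

```python
import numpy as np

# Single line-to-ground fault current via symmetrical components:
# If = 3E / (Z1 + Z2 + Z0 + 3*RF). Impedances are illustrative placeholders.
E = 30e3 / np.sqrt(3)            # phase-to-neutral voltage, volts
Z1 = complex(1.2, 4.8)           # positive-sequence impedance, ohms
Z2 = Z1                          # negative-sequence (for lines, Z2 = Z1)
Z0 = complex(3.5, 14.0)          # zero-sequence impedance, ohms

for rf in (0.0, 10.0, 20.0, 30.0):   # fault resistance sweep, ohms
    i_fault = 3 * E / (Z1 + Z2 + Z0 + 3 * rf)
    print(f"RF = {rf:4.1f} ohm -> |If| = {abs(i_fault):7.1f} A")
```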

Keywords: pulse width modulated series compensator (pwmsc), duty cycle, distribution line, short-circuit calculations, ground fault, symmetrical components method

Procedia PDF Downloads 500
2799 The Impact of Introspective Models on Software Engineering

Authors: Rajneekant Bachan, Dhanush Vijay

Abstract:

The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.

Keywords: software engineering, architectures, introspective models, operating systems

Procedia PDF Downloads 538
2798 The Per Capita Income, Energy Production and Environmental Degradation: A Comprehensive Assessment of the Existence of the Environmental Kuznets Curve Hypothesis in Bangladesh

Authors: Ashique Mahmud, MD. Ataul Gani Osmani, Shoria Sharmin

Abstract:

In the first quarter of the twenty-first century, the most substantial global concern is environmental contamination, and it has gained the prioritization of both the national and international community. With this crucial fact in mind, this study applied different statistical and econometric methods to identify whether the gross national income of the country has a significant impact on electricity production from nonrenewable sources and on different air pollutants such as carbon dioxide, nitrous oxide, and methane emissions. The primary objective of this research was to analyze whether the environmental Kuznets curve hypothesis holds for the examined variables. After analyzing different statistical properties of the variables, the study concludes that the environmental Kuznets curve hypothesis holds for gross national income and carbon dioxide emissions in Bangladesh in both the short run and the long run. This conclusion is based on the findings of ordinary least squares estimations, ARDL bounds tests, short-run causality analysis, the Error Correction Model, and the other pre-diagnostic and post-diagnostic tests employed in the structural model. Moreover, the study indicates that the trajectory of gross national income and carbon dioxide emissions is in its initial stage of development and will increase up to an optimal peak; the compositional effect will then force emissions to decrease, and environmental quality will be restored in the long run.
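
In its simplest regression form, the EKC test above looks for an inverted-U: a positive coefficient on income and a negative coefficient on its square. The sketch below illustrates this with statsmodels on simulated stand-in series (the paper's actual analysis uses ARDL bounds testing and an ECM):

```python
import numpy as np
import statsmodels.api as sm

# The EKC hypothesis implies an inverted-U: in the regression
# CO2 = b0 + b1*GNI + b2*GNI^2 + e, it requires b1 > 0 and b2 < 0.
# These series are simulated stand-ins, not the Bangladesh data.
rng = np.random.default_rng(1)
gni = np.linspace(1, 10, 50)
co2 = 2 + 1.5 * gni - 0.09 * gni**2 + rng.normal(0, 0.4, 50)

X = sm.add_constant(np.column_stack([gni, gni**2]))
fit = sm.OLS(co2, X).fit()
b0, b1, b2 = fit.params
print(f"b1={b1:.3f} (expect > 0), b2={b2:.3f} (expect < 0)")
print(f"turning point at GNI = {-b1 / (2 * b2):.2f}")
```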

Keywords: environmental Kuznets curve hypothesis, carbon dioxide emission in Bangladesh, gross national income in Bangladesh, autoregressive distributed lag model, granger causality, error correction model

Procedia PDF Downloads 150
2797 Electrochemical Growth and Properties of Cu2O Nanostructures

Authors: A. Azizi, S. Laidoudi, G. Schmerber, A. Dinia

Abstract:

Cuprous oxide (Cu2O) is a well-known oxide semiconductor with a band gap of 2.1 eV and natural p-type conductivity, making it an attractive material for device applications because of its abundant availability, non-toxicity, and low production cost. It has a high absorption coefficient in the visible region, and its minority carrier diffusion length is suitable for use as a solar cell absorber layer; it has been explored in junction with n-type ZnO for photovoltaic applications. Cu2O nanostructures have been made by a variety of techniques; among them, electrodeposition has emerged as one of the most promising processing routes, as it offers advantages such as low cost, low temperature, and a high level of purity in the products. In this work, Cu2O nanostructures prepared by electrodeposition from an aqueous cupric sulfate solution with citric acid at 65 °C onto fluorine-doped tin oxide (FTO) coated glass substrates were investigated. The effects of the deposition potential on the electrochemical, surface morphological, structural, and optical properties of the Cu2O thin films were examined. From cyclic voltammetry experiments, the potential interval in which the electrodeposition of Cu2O takes place was established. The Mott–Schottky (M-S) plots demonstrate that all the films are p-type semiconductors, and the flat-band potential and acceptor density of the Cu2O thin films are determined from them. AFM images reveal that the applied potential has a very significant influence on the surface morphology and crystallite size of the Cu2O films. The XRD measurements indicate that all the obtained films display the Cu2O cubic structure with a strong preferential orientation along the (111) direction. The optical transmission spectra in the UV-visible domain revealed a highest transmission of 75%, and the calculated band gap values increased from 1.93 to 2.24 eV with increasing deposition potential.
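
For reference, the acceptor density obtained from a Mott–Schottky analysis follows from the slope of the 1/C² vs. V line for a p-type film, N_A = −2/(e·ε·ε₀·A²·slope). The sketch below uses an assumed relative permittivity for Cu2O and illustrative slope and area values, not the paper's measurements:

```python
# Acceptor density from the slope of a Mott-Schottky plot (1/C^2 vs V) for a
# p-type film: N_A = -2 / (e * eps_r * eps0 * A^2 * slope).
e = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m
eps_r = 7.6            # relative permittivity assumed for Cu2O
A = 1.0e-4             # electrode area, m^2 (1 cm^2, illustrative)
slope = -1.9e13        # d(1/C^2)/dV in F^-2 V^-1 (illustrative; negative for p-type)

N_A = -2.0 / (e * eps_r * eps0 * A**2 * slope)
print(f"acceptor density N_A = {N_A:.2e} m^-3 ({N_A * 1e-6:.2e} cm^-3)")
```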

Keywords: Cu2O, electrodeposition, Mott–Schottky plot, nanostructure, optical properties, XRD

Procedia PDF Downloads 355
2796 Laser Writing on Vitroceramic Disks for Petabyte Data Storage

Authors: C. Busuioc, S. I. Jinga, E. Pavel

Abstract:

The continuous need for more non-volatile memories with higher storage capacity, smaller dimensions and weight, as well as lower costs, has led to the exploration of optical lithography on active media, as well as patterned magnetic composites. In this context, optical lithography is a technique that can provide a significant decrease of the information bit size down to the nanometric scale. However, some restrictions arise from the need to break the optical diffraction limit. Major achievements have been obtained by employing a vitroceramic material as the active medium and a laser beam operated at low power for the direct writing procedure. Thus, optical discs with ultra-high density were fabricated by a conventional melt-quenching method starting from analytical purity reagents and subsequently used for 3D recording based on their photosensitive features. Naturally, the next step consists of elucidating the composition and structure of the active centers, in correlation with the use of silver and rare-earth compounds in the synthesis of the optical supports. This has been accomplished by modern characterization methods, namely transmission electron microscopy coupled with selected area electron diffraction, scanning transmission electron microscopy, and electron energy loss spectroscopy. The influence of the laser diode parameters, silver concentration, and fluorescent compound formation on the writing process and final material properties was investigated. The results indicate capacities two orders of magnitude higher than other reported information storage systems. Moreover, the fluorescent photosensitive vitroceramics may be integrated in other applications that rely on nanofabrication as the driving force in the electronics and photonics fields.

Keywords: data storage, fluorescent compounds, laser writing, vitroceramics

Procedia PDF Downloads 225
2795 A Model for Language Intervention: Toys & Picture-Books as Early Pedagogical Props for the Transmission of Lazuri

Authors: Peri Ozlem Yuksel-Sokmen, Irfan Cagtay

Abstract:

Oral languages are destined to disappear rapidly in the absence of interventions aimed at encouraging their usage by young children. The seminal language preservation model proposed by Fishman (1991) stresses the importance of multiple generations using the endangered L1 while engaged in daily routines with younger children. Over the last two decades, Fishman (2001) has used his intergenerational transmission model in documenting the revitalization of the Basque language, providing evidence that families are successfully transmitting Euskara as a first language to their children. In our study, to motivate usage of Lazuri, we asked caregivers to speak the language while engaged with their toddlers (12 to 48 months) in semi-structured play, and included both parents (N=32) and grandparents (N=30) as play partners. This unnatural prompting to speak only in Lazuri was greeted with reluctance, as 90% of our families indicated that they had stopped using Lazuri with their children. Nevertheless, caregivers followed instructions and produced 67% of their utterances in Lazuri, with another 14% of utterances using a combination of Lazuri and Turkish (codeswitching). Although children spoke mostly in Turkish (83% of utterances), the frequency of caregiver utterances in Lazuri or codeswitching predicted the extent to which their children used the minority language in return. This trend suggests that home interventions aimed at encouraging dyads to communicate in a non-preferred, endangered language can effectively increase children's usage of the language. Alternatively, this result suggests that any use of the minority language on the part of the children will promote its further usage by caregivers. For researchers examining links between play, culture, and child development, structured play has emerged as a critical methodology (e.g., Frost, Wortham, & Reifel, 2007; Lillard et al., 2012; Sutton-Smith, 1986; Gaskins & Miller, 2009), allowing investigation of cultural and individual variation in parenting styles, as well as the role of culture in constraining the affordances of toys. Toy props, as well as picture-books in native languages, can be used as tools in the transmission and preservation of endangered languages by allowing children to explore adult roles through enactment of social routines and conversational patterns modeled by caregivers. Through adult-guided play, children not only acquire scripts for culturally significant activities but also develop skills in expressing themselves in culturally relevant ways that may continue to develop over their lives through community engagement. Further pedagogical tools, such as language games and e-learning, will be discussed in this proposed oral talk.

Keywords: language intervention, pedagogical tools, endangered languages, Lazuri

Procedia PDF Downloads 330
2794 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning

Authors: Hong Zhang

Abstract:

The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help Chinese students learn Spanish as a second language. Our hypothesis is that they might make better use of MT by learning PE skills specific to foreign language learning. We developed PE training materials based on the data collected in a previous study; the training material covered the special error types of MT output and the error types that our Chinese students of Spanish could not detect in the previous year's experiment. This year we performed a pilot study to evaluate the effectiveness of the PE training materials and the extent to which PE training helps Chinese students studying Spanish. We used screen recording to capture the sessions and noted every action taken by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish, divided into two groups: Group A performed PE training and Group B did not. We prepared a Chinese text for both groups; participants first translated it themselves (human translation), then translated the text with Google Translate and were asked to post-edit the raw MT output. Comparing the results of the PE test, Group A identified and corrected errors faster than Group B and did especially well on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. From these results, we can see that PE training can help Chinese students learn Spanish as a second language. In the future, we intend to focus on the students' struggles during their Spanish studies and refine the PE training materials for teaching Chinese students of Spanish with machine translation.

Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning

Procedia PDF Downloads 144
2793 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the danger zone of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, losses or damage may occur whose extent depends on the location of the targets and the duration of the search. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people to be saved within the allowable response time. We consider the special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, being guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that, in unknown environments, the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, in which a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in its close neighborhood, and (ii) a 'false-positive' detection error, also known as a 'false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding locally optimal strategies. A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting the next location. We provide a fast approximation algorithm for finding the AMR route that adopts a greedy search strategy: in each step, the on-board computer computes a current search effectiveness value for each location in the zone and searches the location with the highest search effectiveness value, as sketched below. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
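
A minimal sketch of the greedy rule just described, assuming a simple Bayesian model: each cell carries a probability of containing the target, a search cost, and a false-negative (overlook) rate β, and after every unsuccessful look the probabilities are re-normalized. Grid size, costs, and β are illustrative, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 12                           # number of cells in the search zone
p = rng.random(n); p /= p.sum()  # prior probability the target is in each cell
cost = rng.uniform(1.0, 3.0, n)  # time/cost to search each cell
beta = 0.2                       # false-negative ('overlook') probability

for step in range(8):
    eff = p * (1.0 - beta) / cost          # current search effectiveness values
    i = int(np.argmax(eff))                # greedy choice: best cell now
    # Condition on an unsuccessful look at cell i (Bayes update).
    denom = 1.0 - p[i] * (1.0 - beta)      # P(no detection this step)
    p_new = p / denom
    p_new[i] = p[i] * beta / denom
    p = p_new
    print(f"step {step}: searched cell {i:2d}, posterior p = {p[i]:.3f}")
```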

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 172
2792 Design of the Compliant Mechanism of a Biomechanical Assistive Device for the Knee

Authors: Kevin Giraldo, Juan A. Gallego, Uriel Zapata, Fanny L. Casado

Abstract:

Compliant mechanisms are designed to deform in a controlled manner in response to external forces, using the flexibility of their components to store elastic potential energy during deformation and gradually release it upon returning to the original form. This article explores the design of a knee orthosis intended to assist users during the stand-up motion. The orthosis makes use of a compliant mechanism to balance the user's weight, thereby minimizing the strain on the leg muscles during the stand-up motion. The primary function of the compliant mechanism is to store and exchange potential energy, so that when coupled with the gravitational potential of the user, the total potential energy variation is minimized. The design process for the semi-rigid knee orthosis involved material selection and the development of a numerical model of the compliant mechanism treated as a spring. The geometric properties are obtained through the numerical modeling of the spring once the desired stiffness and safety factor values have been attained. Subsequently, a 3D finite element analysis was conducted. The study demonstrates a strong correlation between the maximum stress in the mathematical model (250.22 MPa) and in the simulation (239.8 MPa), a 4.16% error. The safety factors of the two analyses, 1.02 for the mathematical approach and 1.1 for the simulation, show a consistent 7.84% margin of error. The spring's stiffness, calculated at 90.82 Nm/rad analytically and 85.71 Nm/rad in the simulation, exhibits a 5.62% difference. These results suggest significant potential for the proposed device in assisting patients with knee orthopedic restrictions, contributing to ongoing efforts to advance the understanding and treatment of knee osteoarthritis.
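
The reported discrepancies can be cross-checked directly from the numbers quoted in the abstract; a short verification follows (values taken from the text above, percent error relative to the analytical value):

```python
# Cross-checking the model-vs-simulation discrepancies reported above.
pairs = {
    "max stress (MPa)":   (250.22, 239.8),
    "safety factor":      (1.02, 1.1),
    "stiffness (Nm/rad)": (90.82, 85.71),
}
for name, (analytical, simulated) in pairs.items():
    err = abs(simulated - analytical) / analytical * 100.0
    print(f"{name:>20}: analytical={analytical}, FEA={simulated}, error={err:.2f}%")
# Prints 4.16%, 7.84%, and 5.63%, matching the abstract's figures.
```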

Keywords: biomechanics, compliant mechanisms, gonarthrosis, orthoses

Procedia PDF Downloads 36
2791 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have been targeted towards protein-coding regions alone. There are therefore challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient, sequence-alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function, and a parameter vector was estimated for every sample in the 37,503 data points in order to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was measured in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) likewise decreased from 1.446 to 0.842 and then to 0.718 over the first, second, and third iterations. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an area under the ROC curve of 0.97, indicating an improved predictive ability, and identified both protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. On 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identifies protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
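
A minimal sketch of the kind of model the abstract describes: logistic regression over six features with a sigmoid activation, fitted by gradient ascent on the log-likelihood. The feature data are random placeholders (the generating coefficients are seeded from the values reported above), and a fixed 0.5 threshold stands in for the paper's dynamic thresholding:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(37503, 6))                    # six features per sample
true_w = np.array([0.04, 0.52, 0.72, 0.88, 1.16, 2.58])  # near reported values
y = (rng.random(37503) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, lr = np.zeros(6), 0.1
for epoch in range(390):                  # the abstract reports 390 epochs
    p = sigmoid(X @ w)
    grad = X.T @ (y - p) / len(y)         # gradient of the mean log-likelihood
    w += lr * grad                        # gradient ascent (MLE)

pred = sigmoid(X @ w) >= 0.5              # fixed threshold; the paper's is dynamic
acc = (pred == y.astype(bool)).mean()
print("learned coefficients:", np.round(w, 3), " accuracy:", round(acc, 3))
```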

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 68
2790 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying technologies for various food processing operations share an inevitable linkage with energy, cost, and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing the climate change scenario. However, the performance and reliability of solar dryers depend heavily on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome these disadvantages. For the development of such robust hybrid dryers, optimization of the process parameters becomes extremely critical to ensure the quality and shelf-life of paddy grains. Investigating the moisture distribution profile within the grains is necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool for providing better insight into moisture migration during the drying process. Hence, the present work aims to optimize the process parameters and to develop a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model, and optimization of the process parameters (power level, air velocity, and moisture content) was done using response surface methodology in the Design-Expert software. A 3D finite element model predicting moisture migration in a single kernel for every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were found to be 0.003, 0.0531, and 0.0007, respectively, indicating close agreement of the model with experimental results. Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (wb), with an optimum temperature, milling yield, and drying time of 42 °C, 62%, and 86 min, respectively, at a desirability of 0.905. These optimized conditions can be used to dry paddy in the PV-integrated solar dryer to attain maximum uniformity, quality, and yield of the product. PV-integrated hybrid solar dryers can thus be employed as a potential, cutting-edge drying technology alternative for sustainable energy and food security.
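
As a one-dimensional stand-in for the 3D COMSOL model, the sketch below integrates Fick's second law in a sphere with an explicit finite-difference scheme. The diffusivity, kernel radius, and moisture values are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

# Explicit finite differences for dM/dt = D * (d2M/dr2 + (2/r) dM/dr),
# i.e., Fick's second law in a sphere (a simple 1D stand-in for the 3D FEM).
D = 1.0e-10            # effective moisture diffusivity, m^2/s (assumed)
radius = 1.2e-3        # equivalent kernel radius, m (assumed)
nr, dt = 40, 0.05      # radial nodes, time step (s); stable: D*dt/dr^2 << 0.5
dr = radius / (nr - 1)
r = np.linspace(0.0, radius, nr)

M = np.full(nr, 0.30)  # initial moisture content, dry basis (assumed)
M_surface = 0.13       # equilibrium moisture at the kernel surface (assumed)

for step in range(int(3600 / dt)):           # one hour of drying
    lap = np.zeros(nr)
    lap[1:-1] = (M[2:] - 2 * M[1:-1] + M[:-2]) / dr**2 \
              + (M[2:] - M[:-2]) / (r[1:-1] * dr)
    lap[0] = 6 * (M[1] - M[0]) / dr**2       # symmetry condition at the centre
    M = M + dt * D * lap
    M[-1] = M_surface                        # Dirichlet boundary at the surface

print(f"mean moisture after 1 h: {M.mean():.4f} (dry basis)")
```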

Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer

Procedia PDF Downloads 150
2789 Nanostructure Formation and Characterization of Eco-Friendly Banana Peels Nanosorbent

Authors: Opeyemi Atiba-Oyewo, Maurice S. Onya, Christian Wolkersdorfer

Abstract:

The nanostructure formation and characterization of an eco-friendly banana peel nanosorbent are thoroughly described in this paper. The transformation of the material during mechanical milling, which enhances properties such as microstructure and surface area relevant to current problems of water pollution and water quality, was studied. Mechanical milling was performed in a planetary continuous milling machine with ethanol as the process control agent, and samples were taken at intervals between 10 h and 30 h of milling to examine the structural changes. The samples were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), and Brunauer-Emmett-Teller (BET) analysis. The results revealed three typical structures with different grain sizes, lattice strains, and shapes, and the deformation mechanisms in these structures were found to differ; further particle fracturing increases the surface area, as confirmed by the BET analysis. XRD shows high densities of dislocations in the large crystallites, implying that dislocation slip is the dominant deformation mechanism. Scanning electron microscopy revealed the morphological properties of the materials at different milling times, the nanostructure of the particles and fibres was confirmed by transmission electron microscopy, and FTIR identified the functional groups responsible for the material's capacity to coordinate and remove metal ions, such as the carboxylic and amine groups at absorption bands of 1730 and 889 cm⁻¹, respectively. However, the choice of this sorbent for the removal of any contaminant will depend on the composition of the effluent to be treated.

Keywords: banana peels, eco-friendly, mechanical milling, nanosorbent, nanostructure, water quality

Procedia PDF Downloads 255
2788 Stochastic Multicast Routing Protocol for Flying Ad-Hoc Networks

Authors: Hyunsun Lee, Yi Zhu

Abstract:

A wireless ad-hoc network is a decentralized type of temporary machine-to-machine connection that is spontaneous or impromptu, so it does not rely on any fixed infrastructure or centralized administration. Unmanned aerial vehicles (UAVs), also called drones, have recently become more accessible and are widely utilized in military and civilian domains such as surveillance, search and detection missions, traffic monitoring, remote filming, and product delivery, to name a few. Communication between these UAVs is made possible and materialized through Flying Ad-hoc Networks (FANETs). However, because the high mobility of UAVs may cause different types of transmission interference, it is vital to design robust routing protocols for FANETs. In this talk, a multicast routing method based on a modified stochastic branching process is proposed. The stochastic branching process is often used to describe the early stage of an infectious disease outbreak, and the reproductive number of the process is used to classify an outbreak as major or minor. Here, the reproductive number regulating the local transmission rate is adapted and modified for flying ad-hoc network communication. The performance of the proposed routing method is compared with well-known methods such as the flooding method and the gossip method on three measures: average reachability, average node usage, and average branching factor. The proposed routing method achieves average reachability very close to the flooding method, average node usage close to the gossip method, and an outstanding average branching factor among the methods. It can be concluded that the proposed multicast routing scheme is more efficient than well-known routing schemes such as flooding and gossip while maintaining high performance.
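
The forwarding rule can be sketched as follows: each node that receives a packet forwards it to each neighbour with probability R0 divided by the mean degree, so the expected number of onward transmissions per node, the reproductive number, is about R0. The toy graph and R0 below are illustrative, not the paper's FANET model, and flooding is included for comparison:

```python
import random

random.seed(3)
N, p_edge = 200, 0.04
# Sparse random (directed) graph standing in for a FANET topology.
adj = [[j for j in range(N) if j != i and random.random() < p_edge]
       for i in range(N)]
mean_deg = sum(len(a) for a in adj) / N

def multicast(forward_prob):
    """Propagate from node 0; return (fraction reached, total transmissions)."""
    reached, frontier, transmissions = {0}, [0], 0
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if random.random() < forward_prob:
                    transmissions += 1
                    if v not in reached:
                        reached.add(v); nxt.append(v)
        frontier = nxt
    return len(reached) / N, transmissions

for label, fp in [("flooding", 1.0), ("branching R0=2", min(1.0, 2.0 / mean_deg))]:
    reach, tx = multicast(fp)
    print(f"{label:>15}: reachability={reach:.2f}, transmissions={tx}")
```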

Keywords: Flying Ad-hoc Networks, Multicast Routing, Stochastic Branching Process, Unmanned Aerial Vehicles

Procedia PDF Downloads 123
2787 Pregnancy Outcome in Women with HIV Infection from a Tertiary Care Centre of India

Authors: Kavita Khoiwal, Vatsla Dadhwal, K. Aparna Sharma, Dipika Deka, Plabani Sarkar

Abstract:

Introduction: About 2.4 million (1.93 - 3.04 million) people are living with HIV/AIDS in India. Of all HIV infections, 39% (9,30,000) are among women, and 5.4% of infections result from mother-to-child transmission (MTCT); 25,000 infected children are born every year. Besides the risk of mother-to-child transmission of HIV, these women are at higher risk of adverse pregnancy outcomes. The objectives of the study were to compare the obstetric and neonatal outcomes of HIV-positive women with those of low-risk HIV-negative women, and to assess the effect of antiretroviral drugs on preterm birth and IUGR. Materials and Methods: This is a retrospective case record analysis of 212 HIV-positive women delivering between 2002 and 2015 in a tertiary health care centre, compared with 238 HIV-negative controls. Women who underwent medical termination of pregnancy and abortion were excluded from the study. The obstetric outcomes analyzed were pregnancy-induced hypertension, intrauterine growth restriction, preterm birth, anemia, gestational diabetes, and intrahepatic cholestasis of pregnancy. The neonatal outcomes analyzed were birth weight, Apgar score, NICU admission, and perinatal transmission. Out of 212 women, 204 received antiretroviral therapy (ART) to prevent MTCT: 27 women received single-dose nevirapine (sdNVP) or sdNVP tailed with 7 days of zidovudine and lamivudine (ZDV + 3TC), 15 received ZDV, 82 received duovir, and 80 received triple drug therapy, depending on the time period of presentation. Results: The mean age of the 212 HIV-positive women was 25.72 ± 3.6 years; 101 women (47.6%) were primigravida. HIV-positive status was diagnosed during pregnancy in 200 women, while 12 women were diagnosed prior to conception. Among the 212 HIV-positive women, 20 (9.4%) had preterm delivery (< 37 weeks), 194 (91.5%) delivered by cesarean section, and 18 (8.5%) delivered vaginally. 178 neonates (83.9%) received exclusive top feeding and 34 neonates (16.03%) received exclusive breast feeding. When compared to the low-risk HIV-negative women (n=238), HIV-positive women were more likely to deliver preterm (OR 1.27) and to have anemia (OR 1.39) and intrauterine growth restriction (OR 2.07). The incidence of pregnancy-induced hypertension, diabetes mellitus, and ICP was not increased. The mean birth weight was significantly lower in HIV-positive women (2593.60 ± 499 g) than in HIV-negative women (2919 ± 459 g). Complete follow-up is available for 148 neonates to date; the rest are under evaluation. Of these, 7 neonates were found to be HIV-positive. The risk of preterm birth (p = 0.039) and IUGR (p = 0.739) was higher in HIV-positive women who did not receive any ART during pregnancy than in those who did. Conclusion: HIV-positive pregnant women are at increased risk of adverse pregnancy outcomes. A multidisciplinary team approach and the use of highly active antiretroviral therapy can optimize maternal and perinatal outcomes.

Keywords: antiretroviral therapy, HIV infection, IUGR, preterm birth

Procedia PDF Downloads 260
2786 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

This paper presents the mathematical basis of a new algorithmic trend that treats a historical cause of continuous discrimination in the world, as well as its solution, by introducing a new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is indicated that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the mathematical concept of a category is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to the set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization

Procedia PDF Downloads 143
2785 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of different numbers of sub-clusters containing different numbers of examples, also deteriorates classifier performance. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. Data preprocessing handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class is absolutely rare, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles both simultaneously for the binary classification problem. Removing between-class and within-class imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the sub-clusters or sub-concepts present in the dataset; the number of examples oversampled in each sub-cluster is determined by the complexity of that sub-cluster. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using a Lowner-John ellipsoid to increase classifier accuracy. In this study, a neural network is used as the classifier, since it is one in which the total error is minimized, and removing between-class and within-class imbalance simultaneously helps such a classifier give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. The proposed method can thus serve as a good alternative for problem domains such as credit scoring, customer churn prediction, and financial distress that typically involve imbalanced data sets.
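
A simplified sketch of the cluster-aware oversampling idea: fit a model-based clustering (here a Gaussian mixture) to the minority class, then generate SMOTE-style synthetic points within each sub-cluster, allocating more to smaller sub-clusters. The paper's allocation rule (sub-cluster complexity) and its Lowner-John ellipsoid handling of test data are more elaborate than this illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy minority class with two sub-concepts of unequal size
minority = np.vstack([rng.normal([0, 0], 0.3, (40, 2)),
                      rng.normal([3, 3], 0.6, (12, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(minority)
labels = gmm.predict(minority)

n_target = 60  # synthetic examples needed to balance the classes (illustrative)
sizes = np.bincount(labels)
# Allocate inversely to sub-cluster size so small sub-clusters get more samples
alloc = np.round(n_target * (1 / sizes) / (1 / sizes).sum()).astype(int)

synthetic = []
for k, n_k in enumerate(alloc):
    pts = minority[labels == k]
    for _ in range(n_k):
        a, b = pts[rng.integers(len(pts), size=2)]
        synthetic.append(a + rng.random() * (b - a))  # SMOTE-style interpolation
synthetic = np.array(synthetic)
print("synthetic samples per sub-cluster:", alloc, "total:", len(synthetic))
```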

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 418
2784 Electrophoretic Deposition of Ultrasonically Synthesized Nanostructured Conducting Poly(o-phenylenediamine)-Co-Poly(1-naphthylamine) Film for Detection of Glucose

Authors: Vaibhav Budhiraja, Chandra Mouli Pandey

Abstract:

The ultrasonic synthesis of nanostructured conducting copolymers is an effective technique for synthesizing polymers with desired chemical properties. The tailored nanostructure shows tremendous improvement in sensitivity and stability for detecting a variety of analytes. The present work reports an ultrasonically synthesized nanostructured conducting poly(o-phenylenediamine)-co-poly(1-naphthylamine) (POPD-co-PNA). The synthesized material has been characterized using Fourier transform infrared spectroscopy (FTIR), ultraviolet-visible spectroscopy, transmission electron microscopy, X-ray diffraction, and cyclic voltammetry. FTIR spectroscopy confirmed random copolymerization, while the UV-visible studies reveal the variation in polaronic states upon copolymerization. High crystallinity was achieved via the ultrasonic synthesis, as confirmed by X-ray diffraction, and the controlled morphology of the nanostructures was confirmed by transmission electron microscopy. Cyclic voltammetry shows that POPD-co-PNA has rather high electrochemical activity, a behavior explained by the variable orientations adopted by the conducting polymer chains. The synthesized material was electrophoretically deposited onto an indium tin oxide coated glass substrate used as the cathode, with a parallel platinum plate as the counter electrode. The fabricated bioelectrode was further used for the detection of glucose by crosslinking glucose oxidase in the POPD-co-PNA film. The bioelectrode shows a surface-controlled electrode reaction with an electron transfer coefficient (α) of 0.72, a charge transfer rate constant (ks) of 21.77 s⁻¹, and a diffusion coefficient of 7.354 × 10⁻¹⁵ cm²s⁻¹.

Keywords: conducting, electrophoretic, glucose, poly (o-phenylenediamine), poly (1-naphthylamine), ultrasonic

Procedia PDF Downloads 142
2783 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture

Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko

Abstract:

Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracolumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis, yielding 120 samples with dimensions of 50x50 pixels. The feature set was obtained from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet, giving 290 textural feature descriptors. The dimension of the feature space was reduced using three selection methods: the Fisher coefficient (FC), mutual information (MI), and minimization of the classification error probability combined with the average correlation coefficient between chosen features (POE + ACC). Each method returned the ten features occupying the top places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections yielded the same features arranged in a different order, and in both rankings the 50th percentile (Perc.50%) took first place; the next selected features came from the co-occurrence matrix. The feature sets selected in this process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT). To assess classifier accuracy, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV), and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the feature set selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3%, and PPV = 93.3%; the values of the other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model trees classifier, the same ACC value of 90% was obtained, together with the highest values of TNR = 88.3% and NPV = 88.3%, while the other two parameters remained close to the highest, at TPR = 91.7% and PPV = 91.6%. The results of the experiment show that classification trees are an effective method for classifying texture features, allowing the condition of the spongy tissue to be identified for healthy cases and for those with osteoporosis.
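
A compact sketch of the pipeline described above, using scikit-learn: select the ten best of 290 descriptors by mutual information, classify with a tree ensemble (scikit-learn has no Hoeffding tree, so a random forest, which the study also evaluated, stands in), and derive the five reported metrics from the confusion matrix. The data are synthetic placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in: 120 samples (like the 50x50-pixel ROIs), 290 descriptors
X, y = make_classification(n_samples=120, n_features=290, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

sel = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)   # MI selection
clf = RandomForestClassifier(random_state=0).fit(sel.transform(X_tr), y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(sel.transform(X_te))).ravel()
print(f"ACC={(tp + tn) / (tp + tn + fp + fn):.3f}  TPR={tp / (tp + fn):.3f}  "
      f"TNR={tn / (tn + fp):.3f}  PPV={tp / (tp + fp):.3f}  NPV={tn / (tn + fn):.3f}")
```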

Keywords: classification, feature selection, texture analysis, tree algorithms

Procedia PDF Downloads 178
2782 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser

Authors: Edoardo Schinco

Abstract:

Studying the Marxist intellectual tradition, it is possible to identify numerous cases of philosophical regression, in which the achievements of detailed studies were replaced by naïve ideas and earlier misunderstandings; one of the most important examples of this tendency concerns the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practices against every form of ideology are rational dialogue and reasoning based on common sense, which aim to dispel the obscurity of ignorance with the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it tends to become a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology, in its negative meaning, is certainly an error, a misleading knowledge that aims to defend the current state of things and to conceal social, political, or moral contradictions; but that is precisely why the ideological error is not accidental: every ideology is mediately rooted in a particular material context, from which it takes its reason for being. Gramsci avoids, however, any mechanistic interpretation of Marx and, for this reason, underlines the dialectical relation between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. This entails a considerable revaluation of ideology's role in the maintenance of the status quo and a consequent thematization both of ideology as an objective force, active in history, and of ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributes to this crucial question; as a follower of Gramsci's thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISAs). In addition, his philosophy is characterized by structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.

Keywords: Althusser, enlightenment, Gramsci, ideology

Procedia PDF Downloads 199
2781 Study on Shifting Properties of CVT Rubber V-belt

Authors: Natsuki Tsuda, Kiyotaka Obunai, Kazuya Okubo, Hideyuki Tashiro, Yoshinori Yamaji, Hideyuki Kato

Abstract:

The objective of this study is to investigate the effect of belt stiffness on the performance of a CVT unit, such as the required pulley thrust force and the ratio coverage. The CVT unit consists of V-grooved pulleys and a rubber CVT belt. The width of the driving pulley groove was controlled by a stepper motor, while that of the driven pulley was controlled by hydraulic pressure. The mechanical power generated by the motor was transmitted from the driving axis to the driven axis through the CVT unit. The rotational speed and transmitted torque of both axes were measured by tachometers and torque meters attached to the respective axes. The transmitted mechanical power was absorbed by a magnetic powder brake. The thrust force acting on both pulleys and the force between the shafts were measured by load cells. The back-face profile of the rubber CVT belt along the width direction was measured with a two-dimensional laser displacement meter. The study found that reducing the stiffness of the rubber CVT belt in the width direction reduced both the thrust force required for shifting and the ratio coverage of the CVT unit. With decreased width-direction stiffness, excessive concave deformation of the belt in the pulley groove was confirmed; this deformation reduces the apparent wrapping radius of the belt. The proposed model effectively estimated the difference in ratio coverage caused by the concave deformation and can also be used to design rubber CVT belts with optimal bending stiffness in the width direction.
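The link between apparent wrapping radius and ratio coverage can be made concrete with a little geometry. The sketch below is not the authors' model; under assumed radii and an assumed loss term, it only illustrates how a drop in the apparent maximum wrapping radius caused by concave belt deformation shrinks the attainable ratio range.

def ratio_coverage(r_min, r_max, concave_loss=0.0):
    """Ratio coverage (overdrive ratio / underdrive ratio) of a two-pulley
    V-belt CVT. The speed ratio spans r_min/r_max to r_max/r_min, so the
    coverage is (r_max/r_min)**2. concave_loss models the reduction of the
    apparent maximum wrapping radius when a belt with low width-direction
    stiffness deforms concavely in the pulley groove."""
    r_hi = r_max - concave_loss
    return (r_hi / r_min) ** 2

# Assumed geometry: wrapping radius varies between 25 mm and 60 mm.
print(ratio_coverage(25.0, 60.0))                    # stiff belt:     5.76
print(ratio_coverage(25.0, 60.0, concave_loss=3.0))  # compliant belt: 5.20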

Keywords: CVT, continuously variable transmission, rubber, belt stiffness, transmission

Procedia PDF Downloads 143
2780 Estimating Robert Gordon University's Scope Three Emissions by Nearest Neighbor Analysis

Authors: Nayak Amar, Turner Naomi, Gobina Edward

Abstract:

The Scottish Higher Education Institutions (HEIs) must report their scope 1 and 2 emissions, whereas reporting scope 3 emissions is optional. Scope 3 covers indirect emissions, which form a significant component of the total carbon footprint, and it is therefore important to record, measure, and report them accurately. Robert Gordon University (RGU) reported only the scope 3 emissions from business travel, grid transmission and distribution, water supply and transport, and recycling. This study estimates RGU's total scope 3 emissions by comparing it with an HEI of similar scale. The scope 3 emission reporting of sixteen Scottish HEIs was studied, and Glasgow Caledonian University was identified as the nearest neighbour by comparing student full-time equivalent, staff full-time equivalent, research-teaching split, budget, and foundation year. In addition to data from the peer institution, data were collected from the Higher Education Statistics Agency database. Beyond the categories RGU already reports, this study estimated its scope 3 emissions from procurement, student and staff commuting, and international student travel. The results showed that RGU's reporting covered only 11% of its scope 3 emissions. The major contributors were procurement (48%), student commuting (21%), international student travel (16%), and staff commuting (4%). The estimated scope 3 emissions were more than 14 times the reported emissions. This study shows the relative importance of each scope 3 emission source, giving HEIs a guideline on where to focus their attention to capture the maximum of their scope 3 emissions. Moreover, it demonstrates that scope 3 emissions can be estimated with limited data.
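The nearest-neighbour step can be illustrated with a short sketch: normalize the institutional features so no single one dominates, then take the peer with the smallest Euclidean distance. All feature values below are invented placeholders, not the figures used in the study.

import numpy as np

# Features: [student FTE, staff FTE, research share, budget (GBP m), foundation year]
institutions = {
    "RGU":   [12000, 1500, 0.20, 120, 1992],
    "HEI_A": [17000, 2000, 0.25, 160, 1993],
    "HEI_B": [30000, 8000, 0.60, 700, 1451],
    "HEI_C": [ 9000, 1100, 0.15,  90, 1994],
}

names = [n for n in institutions if n != "RGU"]
X = np.array([institutions[n] for n in names], dtype=float)
target = np.array(institutions["RGU"], dtype=float)

# Z-score normalization so scale differences between features do not dominate.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn, tn = (X - mu) / sigma, (target - mu) / sigma

# Euclidean distance to each candidate; the closest institution is the peer.
dist = np.linalg.norm(Xn - tn, axis=1)
print(names[int(np.argmin(dist))])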

Keywords: HEI, university, emission calculations, scope 3 emissions, emissions reporting

Procedia PDF Downloads 100
2779 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions

Authors: John Q. Todd

Abstract:

Given that modern equipment can provide comprehensive health, status, and error-condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to draw on. This presentation shows what these data payloads might look like and how they can be filtered, visualized, turned into metrics, used for machine learning, and made to generate alerts for further action.
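One common pattern is to reduce a raw telemetry stream to a rolling metric and raise an alert when it crosses a threshold; the minimal sketch below illustrates this, with the payload fields, asset name, and threshold all invented for the example.

from collections import deque
from statistics import mean

WINDOW = 5        # rolling-window length, in readings
VIB_LIMIT = 4.0   # assumed vibration alert threshold, mm/s RMS

def watch(readings):
    """Yield an alert whenever the rolling mean vibration exceeds the limit."""
    window = deque(maxlen=WINDOW)
    for r in readings:
        window.append(r["vibration_mm_s"])
        if len(window) == WINDOW and mean(window) > VIB_LIMIT:
            yield f"ALERT {r['asset_id']} at {r['ts']}: rolling vibration {mean(window):.2f} mm/s"

# Example payloads as they might arrive from a sensor gateway.
stream = [
    {"asset_id": "pump-07", "ts": f"2024-01-01T00:0{i}:00", "vibration_mm_s": v}
    for i, v in enumerate([2.1, 2.3, 2.2, 5.8, 6.1, 6.4, 6.2])
]
for alert in watch(stream):
    print(alert)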

Keywords: condition-based maintenance, equipment data, metrics, alerts

Procedia PDF Downloads 188
2778 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial for helping states and cities make affordable housing and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted; HP-RNN was also tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the New York City dataset, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the training and tuning process. Results: Compared to the Linear Regression based model used by HUD et al (2019), HP-RNN significantly improved the Coefficient of Determination (R²) from -11.73 to 0.88 and reduced MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak percentage error of 14.5% between the actual and predicted counts. Finally, the model was used to predict the trend during the COVID-19 pandemic; it shows a good correlation between the actual and the predicted homeless population, with a peak percentage error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homelessness-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Second, the prediction can serve as a reference to policy makers and legislators as they seek to make changes that may affect the factors closely associated with the future homeless population trend.
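The abstract does not specify HP-RNN's architecture, so the sketch below is only a minimal illustration of the general approach: a plain RNN trained with MSE loss to predict the next month's count from a sliding window. The layer sizes, window length, and synthetic series are all assumptions; only the 1983 and 2020 endpoint counts are taken from the abstract.

import torch
import torch.nn as nn

class PopulationRNN(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])  # predict the next month's count

# Synthetic monthly series standing in for the sheltered homeless counts.
series = torch.linspace(12830, 62679, steps=120)   # invented linear trend
series = (series - series.mean()) / series.std()   # normalize

WINDOW = 12
X = torch.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)]).unsqueeze(-1)
y = series[WINDOW:].unsqueeze(-1)

model = PopulationRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                             # MSE, as in the study
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")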

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 121