Search results for: continuous speed profile data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29437

28957 Morphological Properties of Soil Profile of Vineyard of Bangalore North (GKVK Farm), Karnataka, India

Authors: Harsha B. R., K. S. Anil Kumar

Abstract:

A profile of dimensions 1.5 × 1.5 × 1.5 m was dug at the University of Agricultural Sciences, Bangalore, where grapes had been intensively cultivated for 25 years. Horizons were demarcated on the basis of texture, structure, and colour, and details such as depth, texture, colour, consistency, rock fragments, presence of mottles, and structure were recorded and studied according to the standard proforma of soil profile description. The horizons noticed were Ap, Bt1, Bt2, Bt3, Bt4C, Bt5C, and BC, with respective depths of 0-13, 13-37, 37-60, 60-78, 78-104, 104-130, and 130-151+ cm. A reddish-brown colour was noticed in the Ap, Bt1, and Bt2 horizons. A sub-angular blocky structure was observed in all the layers, with a slightly acid reaction. Clear and abrupt smooth boundaries were present between successive layers, with a clayey texture in all the horizons except the Ap horizon, which was clay loam in texture. Variegated soil colours and iron concretions were observed in the Bt4, Bt5, and BC horizons. Clay skins were observed in the Bt and BC horizons. The soils were of highly friable consistency, suitable for grape cultivation.

Keywords: soil morphology, horizons, clay skins, consistency, vineyards

Procedia PDF Downloads 120
28956 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing

Authors: Abdullah Bal, Sevdenur Bal

Abstract:

This paper proposes a new type of hardware implementation for the training of cellular neural networks (CNNs), using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during testing. Since optoelectronic hardware offers parallel, high-speed processing for 2D data, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam, realizing 2D matrix multiplication and summation at the speed of light. Therefore, in each training iteration, JTC inherently carries most of the computational burden, while the remaining mathematical computation is realized digitally. Bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of JTC are then utilized to sum two cross-correlations, which further reduces the computation required in the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system.

Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware

Procedia PDF Downloads 495
28955 Checking Planetary Clutch on the Romania Tractor Using Mathematical Equations

Authors: Mohammad Vahedi Torshizi

Abstract:

In this investigation, the bending stress, contact stress, bending safety factor, and contact safety factor between the sun gear and planet gear teeth were first determined using mathematical equations. The sun gear speed, carrier speed, power transmitted by the sun gear, sun gear torque, sun gear peripheral speed, and tangential force entering the gears were also calculated using mathematical equations. According to the obtained results, the maximum bending stress and contact stress occurred in the three-planet configuration and were lower in the four-planet configuration. The maximum carrier speed, sun gear peripheral speed, bending safety factor, and contact safety factor were obtained in the four-planet configuration, whereas the maximum transmitted power, tangential force, bending stress, and contact stress occurred in the three-planet configuration; the other factors were equal in the two configurations.
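The checks described above rest on standard epicyclic-gear relations. The following sketch (with hypothetical gear data, not the authors' actual equations) illustrates the carrier speed for a fixed ring gear, the tangential force shared among the planets, and the Lewis bending stress:

```python
def carrier_speed(n_sun_rpm, z_sun, z_ring):
    # Willis equation with the ring gear held fixed:
    # n_carrier = n_sun * z_sun / (z_sun + z_ring)
    return n_sun_rpm * z_sun / (z_sun + z_ring)

def tangential_force(sun_torque_nm, module_mm, z_sun, n_planets):
    # Pitch diameter d = module * teeth (converted mm to m);
    # the sun torque is shared equally among the planet meshes.
    d_m = module_mm * z_sun / 1000.0
    return 2.0 * sun_torque_nm / d_m / n_planets

def lewis_bending_stress(f_t_newton, face_width_mm, module_mm, lewis_form_factor):
    # Classic Lewis formula: sigma = F_t / (b * m * Y), in N/mm^2 (MPa)
    return f_t_newton / (face_width_mm * module_mm * lewis_form_factor)
```

Sharing the load over four planets instead of three lowers the per-mesh tangential force to three-quarters of the three-planet value, consistent with the lower stresses reported for the four-planet configuration.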

Keywords: bending stress, contact stress, planetary gears, mathematical equations

Procedia PDF Downloads 277
28954 Representational Competence Profile of Secondary Students in Understanding Selected Chemical Principles

Authors: Ryan Villafuerte Lansangan

Abstract:

Assessing students’ understanding at the microscopic level of an abstract subject like chemistry poses a challenge to teachers. The literature reveals that the use of representations serves as an essential avenue for measuring the extent of understanding in the discipline, as an alternative to traditional assessment methods. This undertaking explored the representational competence profile of high school students from the University of Santo Tomas High School in understanding selected chemical principles and correlated it with their academic profile in chemistry, based on their performance in the academic achievement examination in chemistry administered by the Center for Education Measurement (CEM). The common misconceptions of the students on the selected chemistry principles, based on their representations, were taken into consideration, as well as the students’ views regarding the role of chemical representations in their learning. A level-of-representation task instrument covering the main lessons in chemistry, with a corresponding scoring guide, was prepared and utilized in the study. The study revealed that most of the students were rated at Level 2 (symbolic level) in terms of their representational competence in understanding the selected chemical principles through the use of chemical representations. Alternative misrepresentations were most often observed in the students’ representations of chemical bonding concepts, while the concept of the chemical equation appeared to be the most comprehensible topic in chemistry for the students. The data imply that teachers’ representations play an important role in helping students understand concepts at a microscopic level. Results also showed that the students’ academic achievement in chemistry, based on the standardized CEM examination, has a significant association with their representational competence.
In addition, the students’ responses to the questionnaire on their views of chemical representations evidently showed a good understanding of what a chemical representation or mental model is, drawing a negative response to the notion that these tools should be an exact replica. Moreover, the students confirmed a greater appreciation that chemical representations are explanatory tools.

Keywords: chemical representations, representational competence, academic profile in chemistry, secondary students

Procedia PDF Downloads 391
28953 Predicting Potential Protein Therapeutic Candidates from the Gut Microbiome

Authors: Prasanna Ramachandran, Kareem Graham, Helena Kiefel, Sunit Jain, Todd DeSantis

Abstract:

Microbes that reside inside the mammalian GI tract, commonly referred to as the gut microbiome, have been shown to have therapeutic effects in animal models of disease. We hypothesize that specific proteins produced by these microbes are responsible for this activity and may be used directly as therapeutics. To speed up the discovery of these key proteins from big-data metagenomics, we have applied machine learning techniques. Using amino acid sequences of known epitopes and their corresponding binding partners, protein interaction descriptors (PIDs) were calculated, forming a positive interaction set. A negative interaction dataset was calculated using sequences of proteins known not to interact with these same binding partners. Using a Random Forest trained on the positive and negative PIDs, a machine learning model was built and used to predict interacting versus non-interacting proteins. Furthermore, a continuous variable, the cosine similarity of the interaction descriptors, was used to rank bacterial therapeutic candidates. Laboratory binding assays were conducted to test the candidates for their potential as therapeutics. Results from the binding assays reveal the accuracy of the machine learning predictions and are subsequently used to further improve the model.
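The candidate-ranking step can be illustrated with a minimal cosine-similarity sketch; the descriptor vectors and candidate names below are invented for illustration, and the actual PID computation is not shown:

```python
import math

def cosine(u, v):
    # Cosine similarity between two descriptor vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def rank_candidates(reference_pid, candidate_pids):
    # candidate_pids: {candidate name: descriptor vector};
    # a higher cosine is read as a stronger predicted interaction.
    scored = [(name, cosine(reference_pid, vec))
              for name, vec in candidate_pids.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```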

Keywords: protein-interactions, machine-learning, metagenomics, microbiome

Procedia PDF Downloads 361
28952 Pedestrian Behavioral Analysis for Safety at Road Crossing at Selected Intersections in Dhaka City

Authors: Sumit Roy

Abstract:

A clear understanding of pedestrian behaviour at road crossings at intersections is needed for providing the necessary infrastructure and for enhancing pedestrian safety at any intersection. Pedestrian road crossing behaviour was studied at the Motijheel and Kakrail intersections, where Motijheel is a controlled roundabout and Kakrail is a signalized intersection. Around 60 people at each intersection were interviewed for a questionnaire survey, and video recordings at different times of day were made for observation at each intersection. At Motijheel intersection, pedestrian road crossings were much more frequent than at Kakrail intersection, because the number of workplaces there is higher than at Kakrail. From the questionnaire survey, it was found that 80% of pedestrians cross at the intersection to catch buses, whose loading and unloading locations are at the intersection, whereas at Kakrail intersection only 25% of pedestrians cross the road for buses, as buses do not slow down there. At Motijheel intersection, 25 to 40% of pedestrians choose to jump over the barricade instead of using the overbridge, to save time and effort. On the other hand, the pedestrians using the overbridge said that they do so for safety. Moreover, pedestrians cross at the same pace during both red and green intervals when vehicle speeds are in the range of 12.5 to 14.5 km/h and gaps between vehicles are more than 4 m; here, pedestrian crossing speed varies from 3.5 to 7.2 km/h. At Kakrail intersection, the road crossing situation can be classified into four categories. During red time, pedestrians do not wait to cross the road, and crossing speed varies from 3.5 to 7.2 km/h. When vehicle speed varies from 5.4 to 7.4 km/h and gaps between vehicles vary from 1.5 to 2 m, most pedestrians initially choose to wait and try to cross the road in a group, with crossing speeds of 2.7 to 3.5 km/h.
When vehicle speed varies from 10.8 to 18 km/h and gaps between vehicles vary from 2 to 3 m, most people wait and cross the road in a group, with crossing speeds of 3.5 to 5.4 km/h. When vehicle speed varies from 25.2 to 32.4 km/h and gaps between vehicles vary from 4 to 6 m, most pedestrians choose to wait until red time. At Kakrail intersection, 87% of people said that they cross the road with risk, and 60% of pedestrians said that it is risky to get on and off the bus at this intersection. Planned locations of loading and unloading areas for buses can improve pedestrian road crossing behaviour at intersections.

Keywords: crossing speed, pedestrian behaviour, road crossing, use of overbridge

Procedia PDF Downloads 164
28951 The Influence of Brands in E-Sports Spectators

Authors: Rene Kasper, Hyago Ribeiro, Marcelo Curth

Abstract:

Electronic sports, or just e-sports, boast exponential growth in the interest of the public and large investors. E-sports teams resemble classic sports teams, such as football clubs, since their structure includes, besides the athletes, administrators, coaches, and even doctors. The concept of team games brings a very strong social interaction, as users perceive that they interact with real peers rather than competing against intelligent software. In this sense, electronic games are established as a sociocultural phenomenon and as multidimensional media. Thus, this research aims to identify the profile of users and the importance of brands in the Brazilian electronic sports scene, as well as the relationship of consumers (called fans) with the products and services that occupy the media spaces of e-sports championship broadcasts. The research used a descriptive quantitative methodology, applied in different e-sports communities, with 160 respondents. The data collection instrument was a survey containing seven questions, which addressed the profile of the participants and their perception of the proposed research theme. Regarding the profile, ages ranged from 17 to 31 years; 93.3% of respondents were male and 6.7% female. It was found that 93.3% of the participants had followed the Brazilian electronic sports scene for at least 2 years, of whom 26.7% played between 6 and 12 hours a week and 46.7% played more than 12 hours a week. In addition, income was not a deciding factor in enjoying electronic sports games, as the distribution of participants ranged from 1 to 3 minimum wages (33.3%) to more than 6 salaries (46.7%). Regarding brands, 85.6% emphasized that brands should support the scene through sponsorship and publicity, and 28.6% are attracted to consume brands that advertise in e-sports championships.

Keywords: brands, consumer behavior, e-sports, virtual games

Procedia PDF Downloads 260
28950 Gene Expression Profile Reveals Breast Cancer Proliferation and Metastasis

Authors: Nandhana Vivek, Bhaskar Gogoi, Ayyavu Mahesh

Abstract:

Breast cancer metastasis plays a key role in cancer progression and fatality. The present study examines potential causes of metastasis in breast cancer by investigating novel interactions between genes and their pathways. The gene expression profiles of GSE99394, GSE1246464, and GSE103865 were downloaded from the GEO data repository to analyze the differentially expressed genes (DEGs). Protein-protein interactions, target factor interactions, pathway and gene relationships, and functional enrichment networks were investigated. The proliferation pathway was shown to be highly expressed in breast cancer progression and metastasis in all three datasets. Gene Ontology analysis revealed 11 DEGs as gene targets to control breast cancer metastasis, including LYN, DLGAP5, CXCR4, CDC6, NANOG, IFI30, TXP2, AGTR1, MKI67, and FTH1. Upon studying the function, genomic and proteomic data, and pathway involvement of the target genes, DLGAP5 proved to be a promising candidate because it was highly differentially expressed in all datasets. The study takes a unique perspective on the avenues through which DLGAP5 promotes metastasis. The current investigation helps pave the way in understanding the role DLGAP5 plays in metastasis, which leads to an increased incidence of death among breast cancer patients.
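A common first step in identifying DEGs of the kind described above is a log2 fold-change filter; the sketch below uses invented expression values and a hypothetical cutoff, not the study's actual GEO data:

```python
import math

def select_degs(expr_case, expr_control, lfc_cutoff=1.0):
    # Flag genes whose |log2 fold change| between mean case and mean
    # control expression exceeds the cutoff: a common first DEG filter.
    degs = []
    for gene, case_vals in expr_case.items():
        mean_case = sum(case_vals) / len(case_vals)
        mean_ctrl = sum(expr_control[gene]) / len(expr_control[gene])
        lfc = math.log2(mean_case / mean_ctrl)
        if abs(lfc) >= lfc_cutoff:
            degs.append((gene, lfc))
    return degs
```

In practice a significance test is applied alongside the fold change; only the filter itself is shown here.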

Keywords: genomics, metastasis, microarray, cancer

Procedia PDF Downloads 86
28949 Investigation of Delivery of Triple Play Service in GE-PON Fiber to the Home Network

Authors: Anurag Sharma, Dinesh Kumar, Rahul Malhotra, Manoj Kumar

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the Passive Optical Network. This paper demonstrates the simultaneous delivery of triple play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.
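The reported trend (higher data rates producing higher bit error rates) is commonly quantified through the Gaussian-noise Q-factor approximation; a minimal sketch, not tied to the authors' simulation setup:

```python
import math

def ber_from_q(q_factor):
    # Gaussian-noise approximation for on-off keyed optical links:
    # BER = 0.5 * erfc(Q / sqrt(2)); a higher data rate typically
    # lowers Q and therefore raises the BER.
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))
```

The commonly quoted Q = 6 corresponds to a BER of roughly 1e-9.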

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 715
28948 On the Role of Cutting Conditions on Surface Roughness in High-Speed Thread Milling of Brass C3600

Authors: Amir Mahyar Khorasani, Ian Gibson, Moshe Goldberg, Mohammad Masoud Movahedi, Guy Littlefair

Abstract:

One of the important factors in manufacturing processes, especially machining operations, is surface quality. Improving this parameter improves fatigue strength, corrosion resistance, creep life, and surface friction. The reliability and clearance of removable joints such as threads and nuts are highly related to surface roughness. In this work, the effects of different cutting parameters, such as cutting fluid pressure, feed rate, and cutting speed, on the surface quality of the thread crest in high-speed milling of Brass C3600 were determined. Two popular neural networks, MLP and RBF, coupled with a Taguchi L32 design, were used to model surface roughness and were shown to be highly adept at such tasks. The contribution of this work is modelling the surface roughness on the crest of the thread using a precise profilometer with nanoscale resolution. Experimental tests were carried out for validation and confirmed the suitable accuracy of the proposed model. Pairwise analysis of parameter interactions showed that the most influential cutting parameter on surface roughness is feed rate, followed by cutting speed and cutting fluid pressure.
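An RBF network of the kind used here maps the cutting parameters through Gaussian basis functions to a linear output layer; a minimal prediction sketch (the centers, weights, and gamma width below are hypothetical, and the Taguchi-designed training is not shown):

```python
import math

def rbf_features(x, centers, gamma=1.0):
    # Gaussian radial basis activations around fixed centers
    return [math.exp(-gamma * sum((a - c) ** 2 for a, c in zip(x, center)))
            for center in centers]

def rbf_predict(x, centers, weights, bias=0.0, gamma=1.0):
    # Linear output layer over the radial basis activations
    feats = rbf_features(x, centers, gamma)
    return bias + sum(w * f for w, f in zip(weights, feats))
```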

Keywords: artificial neural networks, cutting conditions, high-speed machining, surface roughness, thread milling

Procedia PDF Downloads 365
28947 Trading off Accuracy for Speed in Powerdrill

Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica

Abstract:

In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration in logs data. Even though it is a highly parallelized column-store and uses in memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimize performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines and so they should be easily applicable to other systems. For the first optimization we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of using sampling on accuracy and propose a simple heuristic for annotating individual result-values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries this effectively brings down the 95th latency percentile from 30 to 4 seconds.
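The sampling-with-accuracy-annotation idea can be sketched as a toy heuristic (this is not PowerDrill's implementation): estimate a count from a uniform sample and flag the result as accurate only when the binomial relative standard error is below a threshold.

```python
import math
import random

def estimate_count(population, predicate, sample_size,
                   rel_err_threshold=0.05, seed=0):
    # Estimate how many rows satisfy `predicate` from a uniform sample,
    # and annotate the result as accurate only when the binomial
    # relative standard error falls below the threshold.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(sample_size)
               if predicate(population[rng.randrange(len(population))]))
    p = hits / sample_size
    estimate = p * len(population)
    std_err = math.sqrt(p * (1.0 - p) / sample_size) * len(population)
    accurate = estimate > 0 and std_err / estimate <= rel_err_threshold
    return estimate, accurate
```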

Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries

Procedia PDF Downloads 249
28946 Quantum Coherence Sets the Quantum Speed Limit for Mixed States

Authors: Debasis Mondal, Chandan Datta, S. K. Sazim

Abstract:

Quantum coherence is a key resource, like entanglement and discord, in quantum information theory. Wigner-Yanase skew information, which was shown to be the quantum part of the uncertainty, has recently been proposed as an observable measure of quantum coherence. On the other hand, the quantum speed limit (QSL) has been established as an important notion for developing ultra-fast quantum computers and communication channels. Here, we show that these two quantities are related, and thus cast coherence as a resource to control the speed of quantum communication. In this work, we address three basic and fundamental questions. There have been rigorous attempts to achieve tighter evolution time bounds and to generalize them to mixed states. However, we are yet to know: (i) what is the ultimate limit of quantum speed? (ii) Can we measure this speed of quantum evolution in interferometry by measuring a physically realizable quantity? Most of the bounds in the literature are either not measurable in interference experiments or not tight enough. As a result, they cannot be effectively used in experiments on quantum metrology, quantum thermodynamics, and quantum communication, and especially in Unruh effect detection, where a small fluctuation in a parameter needs to be detected. Therefore, a search for the tightest yet experimentally realisable bound is the need of the hour. It would be much more interesting if one could relate various properties of states or operations, such as coherence, asymmetry, dimension, and quantum correlations, to the QSL. Although such an understanding may help us to control and manipulate the speed of communication, apart from particular cases like the Josephson junction and the multipartite scenario, there has been little advancement in this direction. Therefore, the third question we ask is: (iii) can we relate such quantities to the QSL?
In this paper, we address these fundamental questions and show that quantum coherence or asymmetry plays an important role in setting the QSL. An important question in the study of the quantum speed limit is how it behaves under classical mixing and partial elimination of states, because this may help us to properly choose a state or evolution operator to control the speed limit. We address this question and show that the product of the time bound of the evolution and the quantum part of the uncertainty in energy, i.e., the quantum coherence or asymmetry of the state with respect to the evolution operator, decreases under classical mixing and partial elimination of states.
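For orientation, the textbook Mandelstam-Tamm bound and the Wigner-Yanase skew information that the abstract builds on can be written as follows (these are the standard forms, not the paper's new bound):

```latex
% Mandelstam--Tamm bound: minimum time to evolve to an orthogonal state
T \ge \frac{\pi\hbar}{2\,\Delta H},
\qquad \Delta H = \sqrt{\langle H^{2}\rangle - \langle H\rangle^{2}} ,
% Wigner--Yanase skew information of a state \rho with respect to H
I(\rho, H) = -\tfrac{1}{2}\,\operatorname{Tr}\!\big(\left[\sqrt{\rho},\, H\right]^{2}\big).
```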

Keywords: completely positive trace preserving maps, quantum coherence, quantum speed limit, Wigner-Yanase Skew information

Procedia PDF Downloads 337
28945 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft.
According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and reliable. The results obtained are very encouraging. Indeed, using tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the error in the FCOM prediction of the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
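The in-flight update loop can be sketched as follows: a deliberately simplified 1-D version with hypothetical breakpoints and gain (the actual Citation X tables are multidimensional). Each sensor measurement nudges the nearest table node toward the observed value, so the interpolated prediction converges on the aircraft's true behaviour.

```python
class AdaptiveTable:
    # 1-D adaptive lookup table: each sensor measurement nudges the
    # nearest breakpoint value toward the observed value.
    def __init__(self, breakpoints, initial_values, gain=0.3):
        self.x = list(breakpoints)     # e.g. altitude breakpoints
        self.y = list(initial_values)  # e.g. fuel-flow values from the FCOM
        self.gain = gain               # fraction of the error corrected per update

    def predict(self, x):
        # Linear interpolation between the bracketing breakpoints
        if x <= self.x[0]:
            return self.y[0]
        if x >= self.x[-1]:
            return self.y[-1]
        for i in range(len(self.x) - 1):
            if self.x[i] <= x <= self.x[i + 1]:
                t = (x - self.x[i]) / (self.x[i + 1] - self.x[i])
                return self.y[i] + t * (self.y[i + 1] - self.y[i])

    def update(self, x, measured):
        # Move the node nearest to x by a fraction of the prediction error
        i = min(range(len(self.x)), key=lambda k: abs(self.x[k] - x))
        self.y[i] += self.gain * (measured - self.predict(x))
```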

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 253
28944 Distributed Perceptually Important Point Identification for Time Series Data Mining

Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung

Abstract:

In the field of time series data mining, the Perceptually Important Point (PIP) identification process was first introduced in 2001. The process originally targeted financial time series pattern matching and was then found suitable for time series dimensionality reduction and representation. Its strength is in preserving the overall shape of the time series by identifying its salient points. With the rise of Big Data, time series data contributes a major proportion, especially data generated by sensors in the Internet of Things (IoT) environment. Given the nature of PIP identification and its successful applications, it is worthwhile to further explore the opportunity to apply PIP to time series ‘Big Data’. However, the performance of PIP identification has always been considered a limitation when dealing with ‘Big’ time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches remove the bottleneck of running the PIP identification process on a standalone computer, and the distributed versions achieve a clear improvement in speed.
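The core standalone PIP identification step can be sketched as follows; this is the classic vertical-distance variant, not the proposed distributed SB-Tree versions:

```python
def pip_identify(series, n_pips):
    # Classic PIP scheme: starting from the two endpoints, repeatedly
    # add the point with the maximum vertical distance from the chord
    # joining its two neighbouring PIPs.
    n = len(series)
    pips = [0, n - 1]
    while len(pips) < min(n_pips, n):
        best_idx, best_dist = None, -1.0
        for a, b in zip(pips, pips[1:]):
            for i in range(a + 1, b):
                t = (i - a) / (b - a)
                chord = series[a] + t * (series[b] - series[a])
                dist = abs(series[i] - chord)
                if dist > best_dist:
                    best_idx, best_dist = i, dist
        pips.append(best_idx)
        pips.sort()
    return pips
```

On a short series with a single spike, the endpoints and the spike are kept first, which is exactly the shape-preserving behaviour described above.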

Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining

Procedia PDF Downloads 418
28943 Prediction of CO2 Concentration in the Korea Train Express (KTX) Cabins

Authors: Yong-Il Lee, Do-Yeon Hwang, Won-Seog Jeong, Duckshin Park

Abstract:

High-speed trains rely on forced ventilation, so controlling the ventilation is important for managing various contaminants, temperature, and humidity. High-speed train routes run as straight as possible to their destinations, and Korea has many mountainous areas, so the proportion of the route in tunnels is higher than in other countries. The KTX HVAC system blocks off outdoor air when entering a tunnel, so the high tunnel ratio affects ventilation in the KTX cabin and makes predicting the CO2 concentration important. To meet the recommended air quality standards for public transport vehicles, the CO2 concentration in the KTX cabin should be managed. In this study, the concentration change was predicted by CO2 prediction simulation on a route to be opened.
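A simulation of the kind described usually rests on a well-mixed mass balance, dC/dt = (Q(C_out - C) + G)/V. The sketch below uses hypothetical passenger, volume, and generation figures, not the study's KTX data; setting the airflow Q to zero reproduces the tunnel case, where the cabin concentration climbs while outdoor air is blocked.

```python
def co2_profile(c0_ppm, outdoor_ppm, gen_l_per_h_per_person, n_passengers,
                cabin_volume_m3, airflow_m3_per_h, dt_h, steps):
    # Well-mixed cabin balance: dC/dt = (Q * (C_out - C) + G) / V,
    # integrated with forward Euler; G is converted from litres of CO2
    # per hour to an equivalent ppm/h source term.
    c = c0_ppm
    profile = [c]
    gen_ppm_per_h = gen_l_per_h_per_person * n_passengers / cabin_volume_m3 * 1000.0
    for _ in range(steps):
        dcdt = airflow_m3_per_h * (outdoor_ppm - c) / cabin_volume_m3 + gen_ppm_per_h
        c += dcdt * dt_h
        profile.append(c)
    return profile
```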

Keywords: CO2 prediction, KTX, ventilation, infrastructure and transportation engineering

Procedia PDF Downloads 530
28942 Sampled-Data Control for Fuel Cell Systems

Authors: H. Y. Jung, Ju H. Park, S. M. Lee

Abstract:

A sampled-data controller is presented for solid oxide fuel cell systems, which are expressed by a sector-bounded nonlinear model. Sector-bounded nonlinear systems have a feedback connection between a linear dynamical system and a nonlinearity satisfying certain sector-type constraints. The sampled-data control scheme is very useful since it can handle digital controllers, and increasing research effort has been devoted to sampled-data control systems with the development of modern high-speed computers. The proposed control law is obtained by solving a convex problem satisfying several linear matrix inequalities. Simulation results are given to show the effectiveness of the proposed design method.

Keywords: sampled-data control, fuel cell, linear matrix inequalities, nonlinear control

Procedia PDF Downloads 558
28941 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in the accuracy of the estimates. Additionally, combining IPD and AD moderates the bias of the treatment effect estimates, as IPD tends to overestimate the treatment effects, while AD has a tendency to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
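Once each study, whether AD or IPD summarised to the same form, is reduced to an effect estimate and variance, pooling proceeds by inverse-variance weighting; a minimal fixed-effect sketch (the numbers in the usage are illustrative):

```python
def pool_fixed_effect(estimates, variances):
    # Inverse-variance (fixed-effect) pooling of per-study estimates:
    # each study is weighted by the reciprocal of its variance.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance
```

Two equally precise studies with effects 1.0 and 3.0 pool to 2.0, with the pooled variance halved.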

Keywords: aggregate data, combined-level data, individual patient data, meta-analysis

Procedia PDF Downloads 361
28940 mKDNAD: A Network Flow Anomaly Detection Method Based on Multi-Teacher Knowledge Distillation

Authors: Yang Yang, Dan Liu

Abstract:

Anomaly detection models for network flow based on machine learning have poor detection performance under extremely unbalanced training data conditions, as well as slow detection speed and large resource consumption when deployed on network edge devices. Embedding multi-teacher knowledge distillation (mKD) in anomaly detection can transfer knowledge from multiple teacher models to a single model. Inspired by this, we propose a state-of-the-art model, mKDNAD, to improve detection performance. mKDNAD mines and integrates the knowledge of the one-dimensional sequences and two-dimensional images implicit in network flow to improve the detection accuracy of small-sample classes. The multi-teacher knowledge distillation method guides the training of the student model, thus speeding up detection and reducing the number of model parameters. Experiments on the CICIDS2017 dataset verify the improvements of our method in detection speed and in detection accuracy on the small-sample classes.
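The multi-teacher distillation signal can be sketched as follows (a generic formulation with invented logits, not mKDNAD's actual architecture): average the teachers' temperature-softened distributions and train the student against that soft target.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-softened probability distribution over the classes
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def multi_teacher_soft_targets(teacher_logits_list, temperature=4.0):
    # Average the softened class distributions of several teachers
    dists = [softmax(logits, temperature) for logits in teacher_logits_list]
    n_classes = len(dists[0])
    return [sum(d[i] for d in dists) / len(dists) for i in range(n_classes)]

def distillation_loss(student_logits, soft_targets, temperature=4.0):
    # Cross-entropy between the averaged teacher targets and the
    # student's softened distribution
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(p) for t, p in zip(soft_targets, student_probs))
```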

Keywords: network flow anomaly detection (NAD), multi-teacher knowledge distillation, machine learning, deep learning

Procedia PDF Downloads 108
28939 An Insight into the Paddy Soil Denitrifying Bacteria and Their Relation with Soil Phospholipid Fatty Acid Profile

Authors: Meenakshi Srivastava, A. K. Mishra

Abstract:

This study characterizes the metabolic versatility of denitrifying bacterial communities residing in paddy soil using GC-MS-based phospholipid fatty acid (PLFA) analysis together with nosZ gene-based PCR-DGGE (polymerase chain reaction-denaturing gradient gel electrophoresis) and real-time Q-PCR analysis. We analyzed the abundance of nitrous oxide reductase (nosZ) genes, which was subsequently related to the soil PLFA profile and the DGGE-based denitrifier community structure. The soil denitrifying bacterial community was dominated by Ochrobactrum sp., followed by Cupriavidus and uncultured bacterial strains, in the paddy soil of the selected sites. The abundance of the nosZ gene was found to be related to the PLFA-based lipid profile. Chandauli, in eastern UP, India, showed a greater lipid content (C18-C20) and denitrifier diversity. This study suggests a positive correlation between the soil PLFA profiles, DGGE, and Q-PCR data. Thus, a close networking between the metabolic abilities and taxonomic composition of soil microbial communities exists, and such work at a greater extent could be helpful in managing the nutrient dynamics as well as the microbial dynamics of the paddy soil ecosystem.

Keywords: denaturing gradient gel electrophoresis, DGGE, nitrifying and denitrifying bacteria, PLFA, Q-PCR

Procedia PDF Downloads 113
28938 Modeling the Risk Perception of Pedestrians Using a Nested Logit Structure

Authors: Babak Mirbaha, Mahmoud Saffarzadeh, Atieh Asgari Toorzani

Abstract:

Pedestrians are the most vulnerable road users since they do not have a protective shell. One of the most common collisions involving them is the pedestrian-vehicle collision at intersections. In order to develop appropriate countermeasures to improve their safety, research has to be conducted to identify the factors that affect the risk of getting involved in such collisions. More specifically, this study investigates factors such as walking alone or carrying a baby while crossing the street, the observable age of the pedestrian, the speed of pedestrians, and the speed of approaching vehicles, and their influence on the risk perception of pedestrians. A nested logit model was used for modeling the behavioral structure of pedestrians. The results show that the presence of more lanes at intersections and not being alone, especially carrying a baby while crossing, decrease the probability of risk-taking among pedestrians. Also, teenagers appear to show riskier behavior in crossing the street in comparison to other age groups. The speed of approaching vehicles was also significant: the probability of risk-taking among pedestrians decreases as the speed of the approaching vehicle increases in both the first and the second lanes of crossings.
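The nested logit structure can be illustrated with a small two-nest example: an alternative's probability is the product of the nest probability (driven by the inclusive value) and the within-nest conditional probability. The alternatives, utilities, and scale parameters below are hypothetical, not the study's estimates.

```python
import math

def nested_logit_probs(utilities, nests, lambdas):
    """utilities: {alt: V}, nests: {nest: [alts]}, lambdas: {nest: scale in (0, 1]}."""
    # Inclusive value (log-sum) of each nest.
    iv = {m: math.log(sum(math.exp(utilities[a] / lambdas[m]) for a in alts))
          for m, alts in nests.items()}
    denom = sum(math.exp(lambdas[m] * iv[m]) for m in nests)
    probs = {}
    for m, alts in nests.items():
        p_nest = math.exp(lambdas[m] * iv[m]) / denom
        for a in alts:
            # Conditional probability of the alternative within its nest.
            p_within = math.exp(utilities[a] / lambdas[m]) / math.exp(iv[m])
            probs[a] = p_nest * p_within
    return probs

# Hypothetical crossing decisions: two correlated "risky" alternatives vs. waiting.
V = {'cross_now': 0.2, 'cross_gap': 0.8, 'wait': 1.0}
nests = {'risky': ['cross_now', 'cross_gap'], 'cautious': ['wait']}
p = nested_logit_probs(V, nests, {'risky': 0.5, 'cautious': 1.0})
print({k: round(v, 3) for k, v in p.items()})
```

A lambda below 1 in the "risky" nest captures the correlated unobserved factors shared by the two risky crossing options, which a flat multinomial logit cannot represent.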

Keywords: pedestrians, intersection, nested logit, risk

Procedia PDF Downloads 173
28937 Efficiency of Using E-Wallets as Payment Method in Marikina City During COVID-19 Pandemic

Authors: Noel Paolo Domingo, James Paul Menina, Laurente Ferrer

Abstract:

Most people were forced to stay at home and limit their physical contact during the COVID-19 pandemic. Due to this situation, strict implementation of government policies and safety protocols encouraged consumers to utilize cashless or digital transactions through e-wallets. In this study, the researchers aim to investigate the efficiency of using e-wallets as a payment method during the COVID-19 pandemic in Marikina City. The study examined the efficiency of e-wallets in terms of usefulness, convenience, and safety and security, based on respondents' assessment. Questionnaires developed by the researchers were distributed, using a purposive sampling technique, to a total of 400 e-wallet users in Marikina City aged 15 years old and above. The data collected were processed using SPSS version 26. Frequency, percentage, and mean were utilized to describe the profile of respondents and their assessment of e-wallets in terms of the three constructs. ANOVA and t-tests were also employed to test for significant differences in the respondents' assessment when the demographic profile was considered. The study revealed that in terms of usefulness, e-wallets are efficient, while in terms of convenience and safety and security, e-wallets proved to be very efficient. During the COVID-19 pandemic, e-wallets were embraced by most consumers. By enhancing their features, more people will be satisfied with using e-wallets.
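The group-comparison step mentioned above (t-tests on assessments across demographic groups) can be sketched with Welch's t statistic, which does not assume equal variances. The ratings below are hypothetical, not the survey data.

```python
import math

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    def mean_var(x):
        m = sum(x) / len(x)
        return m, sum((xi - m) ** 2 for xi in x) / (len(x) - 1)
    ma, va = mean_var(a)
    mb, vb = mean_var(b)
    sa, sb = va / len(a), vb / len(b)
    t = (ma - mb) / math.sqrt(sa + sb)
    # Welch-Satterthwaite approximation for the degrees of freedom.
    df = (sa + sb) ** 2 / (sa ** 2 / (len(a) - 1) + sb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical 5-point "convenience" ratings from two age groups.
young = [4.5, 4.2, 4.8, 4.6, 4.4, 4.7]
older = [4.0, 4.3, 3.9, 4.1, 4.2, 3.8]
t, df = welch_t(young, older)
print(round(t, 2), round(df, 1))
```

In practice, the statistic would be compared against the t distribution with df degrees of freedom to obtain the p-value SPSS reports.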

Keywords: efficiency of e-wallets, usefulness, convenience, safety and security

Procedia PDF Downloads 120
28936 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan

Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad

Abstract:

Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50-100 nm. This is a critical advantage, as growing the larger crystals required by X-ray crystallography is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron-beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based, fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively high and commonly specimen-damaging electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (such as structural biology cryo-EM), while Stela is a counting electron detector optimized for diffraction applications with high speed and high dynamic range. Lastly, the data collection workflow, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, can be another challenge of the 3DED technique. Traditionally, this has all been done manually, or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher-quality 3DED data enable structure determination with higher confidence, while automated workflows allow these experiments to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 Å to better than 1 Å) for protein and small molecule structure determination. We will also show how Latitude D facilitates collecting such data in an integrated and fully automated user interface.

Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, DigitalMicrograph, proteins, small molecules

Procedia PDF Downloads 91
28935 The Application of the Enterprise Systems through the Cloud Computing in Company: A Review and Suggestions

Authors: Mohanaad Talal Shakir, Saad AJAJ Khalaf, Nawar Ahmed Aljumaily, Mustafa Talal Shakere

Abstract:

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. The objective of this paper is to investigate the role of enterprise systems and cloud computing services in assisting and guiding an enterprise to ensure its initiative becomes successful. Cloud computing technology offers great potential for enterprises, such as speed in dealing with data, product introductions, innovations, and speed of response. The use of cloud computing technology leads to the rapid development and competitiveness of enterprises in various fields.

Keywords: cloud computing, information management, marketing, enterprise systems

Procedia PDF Downloads 666
28934 Assessment of Pedestrian Comfort in a Portuguese City Using Computational Fluid Dynamics Modelling and Wind Tunnel

Authors: Bruno Vicente, Sandra Rafael, Vera Rodrigues, Sandra Sorte, Sara Silva, Ana Isabel Miranda, Carlos Borrego

Abstract:

Wind comfort for pedestrians is an important condition in urban areas. In Portugal, a country with 900 km of coastline, the wind direction is predominantly north-northwest, with an average speed of 2.3 m·s⁻¹ (at 2 m height). As a result, a number of city authorities have been requesting studies of pedestrian wind comfort for new urban areas/buildings, as well as studies to mitigate wind discomfort issues related to existing structures. This work covers the evaluation of the efficiency of a set of measures to reduce the wind speed in an outdoor auditorium (open space) located in a coastal Portuguese urban area. These measures include the construction of barriers, placed upstream and downstream of the auditorium, and the planting of trees upstream of the auditorium. The auditorium is constructed in the form of a porch aligned with the north direction, driving the wind flow within the auditorium, promoting channelling effects and increasing the wind speed, causing discomfort to the users of this structure. To perform the wind comfort assessment, two approaches were used: i) a set of experiments in a wind tunnel (physical approach), with a representative mock-up of the study area; ii) application of the CFD (computational fluid dynamics) model VADIS (numerical approach). Both approaches were used to simulate the baseline scenario and the scenarios considering the set of measures. The physical approach was conducted through a quantitative method, using a hot-wire anemometer, and through a qualitative analysis (visualizations), using laser technology and a fog machine. Both numerical and physical approaches were performed for three different velocities (2, 4 and 6 m·s⁻¹) and two different directions (north-northwest and south), corresponding to the prevailing wind speeds and directions of the study area. The numerical results show an effective reduction (with a maximum value of 80%) of the wind speed inside the auditorium through the application of the proposed measures. A wind speed reduction in a range of 20% to 40% was obtained around the audience area for a wind direction from north-northwest. For southern winds, in the audience zone, the wind speed was reduced by 60% to 80%. Despite this, for southern winds, the design of the barriers generated additional hot spots (high wind speed), namely at the entrance to the auditorium. Thus, a change in the location of the entrance would minimize these effects. The results obtained in the wind tunnel compared well with the numerical data, also revealing the high efficiency of the proposed measures (for both wind directions).

Keywords: urban microclimate, pedestrian comfort, numerical modelling, wind tunnel experiments

Procedia PDF Downloads 214
28933 Multidimensional Item Response Theory Models for Practical Application in Large Tests Designed to Measure Multiple Constructs

Authors: Maria Fernanda Ordoñez Martinez, Alvaro Mauricio Montenegro

Abstract:

This work presents a statistical methodology for measuring and identifying constructs in latent semantic analysis. This approach uses the qualities of factor analysis for binary data with the interpretations present in item response theory. More precisely, we propose initially reducing dimensionality with a specific use of principal component analysis for the linguistic data and then producing axes of groups obtained from a clustering analysis of the semantic data. This approach allows the user to give meaning to the previously obtained clusters and to uncover the real latent structure presented by the data. The methodology is applied to a set of real semantic data, producing impressive results in terms of coherence, speed, and precision.
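The clustering stage of the pipeline, in which reduced semantic axes are grouped so that the resulting clusters can be given meaning, might be sketched with a plain k-means as below. The two-dimensional toy points stand in for PCA-reduced semantic vectors and are purely illustrative.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on lists of equal-length tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster; keep empty clusters.
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Toy stand-ins for PCA-reduced semantic vectors: two well-separated groups.
axes = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers = sorted(kmeans(axes, k=2))
print(centers)
```

Each recovered center then serves as an interpretable axis: the analyst inspects the items loading on it and names the construct it represents.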

Keywords: semantic analysis, factorial analysis, dimension reduction, penalized logistic regression

Procedia PDF Downloads 426
28932 Implications of Creating a 3D Vignette as a Reflective Practice for Continuous Professional Development of Foreign Language Teachers

Authors: Samiah H. Ghounaim

Abstract:

The topic of this paper is significant because of the increasing need for intercultural training for foreign language teachers, who face continuous challenges in their diverse classrooms. First, the structure of the intercultural training program is briefly described, and the structure of a 3D vignette and its intended purposes are elaborated on. In the first stage, the program was designed and implemented over a period of three months with a group of local and expatriate foreign language teachers/practitioners at a university in the Middle East. After that, a set of primary data collected during this first stage, on the design and co-construction process of a 3D vignette, is reviewed and analysed in depth. Each practitioner designed a personal incident into a 3D vignette, where each dimension of the vignette viewed the same incident from a totally different perspective. Finally, the results and the implications of having participants construct their personal incidents into a 3D vignette as a reflective practice are discussed in detail, as well as possible extensions of the research. This process proved to be an effective reflective practice in which the participants were stimulated to view their incidents in a different light. Co-constructing one's own critical incidents, whether a positive experience or not, into a structured 3D vignette encouraged participants to decentralise themselves from the incidents, thus creating a personal reflective space where they had the opportunity to see different potential outcomes for each incident, as well as to prepare for the reflective discussion of their vignette with their peers. This provides implications for future developments in reflective writing practices and possibilities for educators' continuous professional development (CPD).

Keywords: 3D vignettes, intercultural competence training, reflective practice, teacher training

Procedia PDF Downloads 96
28931 Anthropometric Profile as a Factor of Impact on Employee Productivity in Manufacturing Industry of Tijuana, Mexico

Authors: J. A. López, J. E. Olguín, C. W. Camargo, G. A. Quijano, R. Martínez

Abstract:

This paper presents an anthropometric study conducted on 300 employees in a maquiladora industry belonging to the medical products cluster, as part of a research project that aims to simulate the workplace conditions under which operators conduct their activities. This project is relevant because, traditionally, ergonomic workspaces are designed according to the anthropometric profile of users; however, this paper demonstrates the importance of making decisions when the infrastructure cannot be adapted for economic reasons, which puts the emphasis on the user's activity.

Keywords: anthropometry, biomechanics, design, ergonomics, productivity

Procedia PDF Downloads 448
28930 Temperature Distribution for Asphalt Concrete-Concrete Composite Pavement

Authors: Tetsya Sok, Seong Jae Hong, Young Kyu Kim, Seung Woo Lee

Abstract:

The temperature distribution in asphalt concrete (AC)-concrete composite pavement is one of the main factors that affects the performance life of the pavement. The temperature gradient in the concrete slab underneath the AC layer produces critical curling stresses and leads to de-bonding of the AC-concrete interface. These stresses, when enhanced by repetitive axial loadings, also contribute to fatigue damage and eventual crack development within the slab. Moreover, the temperature change within the concrete slab causes the slab to contract and expand, which significantly induces reflective cracking in the AC layer. In this paper, the numerical prediction of pavement temperature was investigated using a one-dimensional finite difference method (FDM) in a fully explicit scheme. The numerical prediction model provides a fundamental and clear understanding of the heat energy balance, including incoming and outgoing thermal energies in addition to the heat dissipated in the system. Using reliable meteorological data for daily air temperature, solar radiation, and wind speed, together with variable pavement surface properties, the predicted pavement temperature profile was validated against field-measured data. Additionally, the effects of AC thickness and daily air temperature on the temperature profile in the underlying concrete were also investigated. Based on the obtained results, the numerically predicted temperature of the AC-concrete composite pavement using the FDM showed good accuracy compared to the field-measured data, and a thicker AC layer significantly insulates the temperature distribution in the underlying concrete slab.
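A fully explicit 1D FDM scheme of the kind referred to above can be sketched for pure heat conduction: each interior node is updated from its neighbours, subject to the stability condition r = alpha*dt/dx^2 <= 0.5. The diffusivity, grid spacing, and boundary temperatures below are generic assumptions for illustration, not the paper's calibrated values, and the real model also includes the surface energy balance (solar radiation, convection).

```python
def step_explicit_fdm(T, alpha, dx, dt):
    """One explicit time step of the 1D heat equation dT/dt = alpha * d2T/dx2."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for r > 0.5"
    new = T[:]
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
    return new  # boundary nodes kept fixed (Dirichlet conditions)

# 15 cm slab with 1 cm nodes; hot surface at 40 C, bottom held at 20 C.
alpha = 1.0e-6          # thermal diffusivity of concrete, m^2/s (typical value)
dx, dt = 0.01, 30.0     # 1 cm grid, 30 s step -> r = 0.3 (stable)
T = [40.0] + [20.0] * 15
for _ in range(240):    # simulate 2 hours
    T = step_explicit_fdm(T, alpha, dx, dt)
print([round(t, 1) for t in T[:5]])
```

Because the update is a convex combination of neighbouring temperatures whenever r <= 0.5, the computed profile stays bounded between the two boundary temperatures, mirroring the physical maximum principle.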

Keywords: asphalt concrete, finite difference method (FDM), curling effect, heat transfer, solar radiation

Procedia PDF Downloads 256
28929 Exploring SSD Suitable Allocation Schemes Incompliance with Workload Patterns

Authors: Jae Young Park, Hwansu Jung, Jong Tae Kim

Abstract:

Whether the data have been well parallelized is an important factor in solid-state drive (SSD) performance. SSD parallelism is affected by the allocation scheme and is directly connected to SSD performance. The representative allocation schemes are dynamic allocation and static allocation. Dynamic allocation is more adaptive in exploiting write-operation parallelism, while static allocation is better for read-operation parallelism. Therefore, it is hard to select the appropriate allocation scheme when the workload mixes read and write operations. We simulated a number of mixed data patterns and analyzed the results to help make the right choice for better performance. The results show that static allocation is more suitable when the data arrival interval is long enough for prior operations to finish and in continuous read-intensive data environments. Dynamic allocation performs best for write performance and random data patterns.
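The difference between the two schemes can be illustrated with a toy write stream: static allocation maps a page's logical address to a fixed channel, while dynamic allocation sends each write to whichever channel frees up first. The channel count, program time, and workload below are made-up parameters for illustration only.

```python
CHANNELS = 4
PROG = 5  # page program time, arbitrary time units

def static_channel(lba):
    """Static allocation: the page's logical address fixes its channel."""
    return lba % CHANNELS

def dynamic_channel(busy_until, now):
    """Dynamic allocation: the write goes to the channel that frees up first."""
    return min(range(CHANNELS), key=lambda c: max(busy_until[c], now))

# Toy write stream of (arrival_time, lba). All LBAs map to channel 0 statically,
# which is a worst case for static allocation but realistic for striped files.
writes = [(0, 0), (0, 4), (0, 8), (1, 12), (1, 16)]

def run(policy_dynamic):
    busy = [0] * CHANNELS
    finish = []
    for now, lba in writes:
        ch = dynamic_channel(busy, now) if policy_dynamic else static_channel(lba)
        start = max(busy[ch], now)   # wait if the channel is still programming
        busy[ch] = start + PROG
        finish.append(busy[ch])
    return max(finish)

print("static :", run(False))
print("dynamic:", run(True))
```

The flip side, which this sketch omits, is reads: static allocation lets the controller compute a page's channel from the LBA alone, so a read-intensive stream with predictable addresses parallelizes well without mapping-table lookups.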

Keywords: dynamic allocation, NAND flash based SSD, SSD parallelism, static allocation

Procedia PDF Downloads 330
28928 The Impact of Motor Predispositions of Pilot-Cadets on Results in Aviation Synthetic Efficiency Test

Authors: Zbigniew Wochynski, Justyna Skrzynska, Robert Jedrys, Zdzislaw Kobos

Abstract:

The aim of the study is to determine the types of motor skills and their impact on the results achieved in the Aviation Synthetic Efficiency Test (ASET). The study involved 59 first-year pilot cadets, 21 years old on average. The average weight of the respondents was 73.8 kg. The subjects were divided into two groups by weight: up to 73.8 kg, group A (n=30), and above 73.8 kg, group B (n=29). All subjects underwent the following tests: 40 m run, 100 m run, 1000 m run, 2000 m run, pull-ups, and the ASET. In both groups, the cadets were divided into two motor-skill types on the basis of the 40 m run, pull-ups, and 2000 m run, and were then subjected to the ASET. Group B showed a statistically significant increase in body height, weight, and BMI, with p<0.0003, p<0.0001, and p<0.0001, respectively, compared to group A. The results indicate that the dominant motor type among all subjects is the endurance-strength model, which reached a speed of V=1.42 m/s in completing the ASET. This is confirmed by the correlation between the 2000 m run and pull-ups, r=0.37 (p<0.05). In group A, the results indicate that the dominant motor type is the speed-endurance model (26.6%), which reached a speed of V=1.42 m/s in completing the ASET. In group B, the dominant type was the speed-strength motor type (20.6%), which reached a speed of V=1.45 m/s in completing the ASET. This is confirmed by the correlation between the ASET and pull-ups, r=0.56 (p<0.005). Examined cadets with one dominant characteristic achieved worse results in the ASET. The best ASET results among all examined cadets were achieved by the endurance-strength motor type; in group A, by the endurance-speed model, and in group B, by the speed-strength type.
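The correlations reported (e.g. r=0.37 between the 2000 m run and pull-ups) are Pearson coefficients, which can be computed as sketched below. The paired scores are hypothetical, chosen only to show the calculation, not the cadets' data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores: 2000 m run time (s) vs. pull-up count.
run_2000m = [420, 445, 460, 480, 500, 515]
pull_ups  = [10, 12, 13, 15, 16, 18]
r = pearson_r(run_2000m, pull_ups)
print(round(r, 2))
```

In the study, such a coefficient would then be checked against a critical value for the sample size to attach the significance level (e.g. p<0.05).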

Keywords: ASET, Aviation Synthetic Efficiency Test, motor skills, physical tests, pilot-cadets

Procedia PDF Downloads 276