Search results for: maximal data sets

22762 A Critical Analysis on Gaps Associated with Culture Policy Milieu Governing Traditional Male Circumcision in the Eastern Cape, South Africa

Authors: Thanduxolo Nomngcoyiya, Simon M. Kang’ethe

Abstract:

The paper aimed to critically analyse gaps in the cultural policy environments governing traditional male circumcision in the Eastern Cape, as exemplified by an empirical case study. The original study from which this paper is derived utilized a qualitative paradigm and encompassed 28 participants. It used in-depth one-on-one interviews, complemented by focus group discussions and key informants, as methods of data collection, and adopted an interview guide as the data collection instrument. The original study was cross-sectional in nature, and the data was audio recorded and later transcribed during the data analysis and coding process. The data analysis employed content thematic analysis and identified the following key findings on the culture of male circumcision policy: lack of clarity on how the policy operates; myths surrounding the procedures of the culture of male circumcision; divergent views on cultural policies between government and male circumcision custodians; unclear cultural policies on the selection criteria of practitioners; and lack of policy enforcement and implementation against transgressors of the culture of male circumcision. It recommended stringent selection criteria for practitioners, a need to carry out death-free male circumcision, and a need for male circumcision stakeholders to work with other culture- and tradition-friendly stakeholders.

Keywords: human rights, policy enforcement, traditional male circumcision, traditional surgeons and nurses

Procedia PDF Downloads 280
22761 Modeling Local Warming Trend: An Application of Remote Sensing Technique

Authors: Khan R. Rahaman, Quazi K. Hassan

Abstract:

Global changes in climate, environment, economies, populations, governments, institutions, and cultures converge in localities. Changes at a local scale, in turn, contribute to global changes as well as being affected by them. Our hypothesis is built on the consideration that temperature varies at the local level (i.e., termed local warming) in comparison to the predictions of models at the regional and/or global scale. To date, the bulk of the research relating local places to global climate change has been top-down, from the global toward the local, concentrating on methods of impact analysis that take as their starting point climate change scenarios derived from global models, even though these have little regional or local specificity. Thus, our focus is to understand such trends over southern Alberta, which will enable decision makers, scientists, the research community, and local people to adapt their policies based on local-level temperature variations and to act accordingly. The specific objectives of this study are: (i) to understand the local warming (temperature in particular) trend in the context of temperature normals during the period 1961-2010 at point locations using meteorological data; (ii) to validate the data by using specific yearly data; and (iii) to delineate the spatial extent of the local warming trends and understand the influential factors, so that local governments can adapt accordingly. Existing data provide evidence of such changes, and future research emphasis will be given to validating this hypothesis based on remotely sensed data (i.e., the MODIS product by NASA).
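
As an illustration of objective (i), the sketch below fits an ordinary least-squares trend to yearly mean temperatures at a single station; the values and the fitting choice are assumptions for illustration, not the study's data.

```python
# Minimal sketch: estimate a local warming trend (deg C per decade) from
# yearly mean temperatures at one station. The station values are synthetic.
import numpy as np
from scipy import stats

years = np.arange(1961, 2011)                        # normal period 1961-2010
temps = 4.0 + 0.02 * (years - 1961) + np.random.normal(0, 0.5, years.size)

slope, intercept, r, p, stderr = stats.linregress(years, temps)
print(f"trend: {slope * 10:+.2f} deg C/decade (p = {p:.3f})")
```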

Keywords: local warming, climate change, urban area, Alberta, Canada

Procedia PDF Downloads 330
22760 Development of Electroencephalograph Collection System in Language-Learning Self-Study System That Can Detect Learning State of the Learner

Authors: Katsuyuki Umezawa, Makoto Nakazawa, Manabu Kobayashi, Yutaka Ishii, Michiko Nakano, Shigeichi Hirasawa

Abstract:

This research aims to develop a self-study system equipped with an artificial teacher who gives advice to students by detecting the learners, and to evaluate language learning in a unified framework. 'Detecting the learners' means that the system understands the learners' learning conditions, such as each learner's degree of understanding, differences in thinking processes, the degree of concentration or boredom in learning, and each learner's problem solving, all of which can be interpreted from learning behavior. In this paper, we propose a system to efficiently collect brain waves from learners by focusing on brain waves alone among the biological information available for 'detecting the learners'. The conventional electroencephalograph (EEG) measurement method during learning using a simple EEG has the following disadvantages: (1) the start and end of EEG measurement must be done manually by the experiment participant or staff; (2) even when the EEG signal is weak, this may not be noticed, and the data may not be obtained; and (3) since the acquired EEG data is stored on each PC, the recorded times of data acquisition may differ between PCs. In this work, we developed a system to collect brain wave data on the server side, which overcomes the above disadvantages.
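
A minimal sketch of what the client side of such server-based collection could look like; the endpoint URL, payload fields, and upload scheme are hypothetical assumptions, not the authors' implementation.

```python
# Minimal sketch of a client uploading EEG samples to a central server:
# samples are timestamped and uploaded automatically, so start/stop, weak-
# signal detection and clock alignment need no manual per-PC steps.
import time
import requests

SERVER = "http://example.org/eeg"                    # hypothetical endpoint

def upload_samples(learner_id: str, samples: list[float]) -> None:
    payload = {"learner": learner_id, "t": time.time(), "eeg": samples}
    r = requests.post(SERVER, json=payload, timeout=5)
    r.raise_for_status()                             # failed uploads surface as errors
```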

Keywords: artificial teacher, e-learning, self-study system, simple EEG

Procedia PDF Downloads 131
22759 Characterization of Optical Communication Channels as Non-Deterministic Model

Authors: Valentina Alessandra Carvalho do Vale, Elmo Thiago Lins Cöuras Ford

Abstract:

Telecommunications sectors are increasingly adopting optical technologies, due to their ability to transmit large amounts of data over long distances. However, as in all data transmission systems, optical communication channels suffer from undesirable and non-deterministic effects, and it is essential to understand them. This research therefore enables the assessment of these effects, as well as their characterization and their beneficial uses.

Keywords: optical communication, optical fiber, non-deterministic effects, telecommunication

Procedia PDF Downloads 778
22758 Rediscovery of Important Elements Contributing to Cultural Interchange Values Made during Restoration of Khanpur Gate

Authors: Poonam A. Trambadia, Ashish V. Trambadia

Abstract:

The architecture of the sultanate period of Ahmedabad evolved just before the establishment of Mughal rule in North India. After the capital of the kingdom was shifted from Patan to Ahmedabad, an interesting cultural blend occurred in the architecture of the buildings and structures then being built. Many sultanate buildings in the historic city of Ahmedabad have a resemblance to those of Patan, including in their names. The outer fortification walls and gates were built during the rule of the third ruler in the late 15th century. All the gates had sandstone slabs supported by three arched entrances in sandstone with wooden shutters. A restoration project for Khanpur Gate was initiated in 2016. The paper identifies some evidences and hidden layers of the structure as important elements of cultural interchange, while others were simply forgotten in the process. The recycling of pre-existing elements of structures is examined and compared. Layers were uncovered that had been hidden behind later repairs using traditional brick arches, which were taken out in the process. As the gate had partially collapsed, the restoration included piece-by-piece dismantling and restoring in the same sequence wherever required. The recycled materials found in the process were recorded and provide the basis for this study. After this discovery, the gate sets a new example of a fortification gate built in the Sultanate era. The comparison excludes Maratha- and British-period gates to avoid further confusion and focuses on the 15th-16th century sultanate architecture of Ahmedabad.

Keywords: Ahmedabad World Heritage, fortification, Indo-Islamic style, Sultanate architecture, cultural interchange

Procedia PDF Downloads 106
22757 Liquefaction Potential Assessment Using Screw Driving Testing and Microtremor Data: A Case Study in the Philippines

Authors: Arturo Daag

Abstract:

The Philippine Institute of Volcanology and Seismology (PHIVOLCS) is enhancing its liquefaction hazard map towards a detailed probabilistic approach using SDS and geophysical data. Target sites for liquefaction assessment are public schools in Metro Manila. Since the target sites are in a highly urbanized setting, the objective of the project is to conduct non-destructive geotechnical studies using Screw Driving Testing (SDS) combined with geophysical data such as refraction microtremor array (ReMi), 3-component microtremor Horizontal to Vertical Spectral Ratio (HVSR), and ground penetrating RADAR (GPR). Initial tests were conducted in areas affected by liquefaction during the Mw 6.1 earthquake in Central Luzon on April 22, 2019, in the province of Pampanga. Numerous accounts of liquefaction events were documented in areas underlain by Quaternary alluvium and mostly covered by recent lahar deposits. SDS estimated values showed a good correlation with actual SPT values obtained from available borehole data, confirming that SDS can be an alternative tool for liquefaction assessment, and one more efficient in terms of cost and time compared to SPT and CPT. Conducting boreholes may be limited by access in highly urbanized areas. In order to extend or extrapolate the SPT borehole data, non-destructive geophysical equipment was used. A 3-component microtremor survey yields a subsurface velocity model as a 1-D seismic shear-wave velocity profile of the upper 30 meters (Vs30). For the ReMi, surveys were conducted with a 12-geophone array at 6- to 8-meter spacing. From the microtremor data, the Factor of Safety was computed as the quotient of the Cyclic Resistance Ratio (CRR) and the Cyclic Stress Ratio (CSR). Complementary GPR was used to infer subsurface structures and groundwater conditions.
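
A minimal sketch of the screening criterion named above, the Factor of Safety as the quotient CRR/CSR, with FS below 1 flagged as liquefiable; the layer values are placeholders, not project data.

```python
# Factor of Safety against liquefaction: FS = CRR / CSR.
def factor_of_safety(crr: float, csr: float) -> float:
    """Cyclic Resistance Ratio over Cyclic Stress Ratio."""
    return crr / csr

# Placeholder layer values for illustration only.
for layer, (crr, csr) in {"2-4 m": (0.18, 0.25), "4-6 m": (0.30, 0.24)}.items():
    fs = factor_of_safety(crr, csr)
    print(layer, f"FS = {fs:.2f}", "liquefiable" if fs < 1.0 else "not liquefiable")
```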

Keywords: screw driving testing, microtremor, ground penetrating RADAR, liquefaction

Procedia PDF Downloads 185
22756 Volatility Index, Fear Sentiment and Cross-Section of Stock Returns: Indian Evidence

Authors: Pratap Chandra Pati, Prabina Rajib, Parama Barai

Abstract:

The traditional finance theory neglects the role of the sentiment factor in asset pricing. However, the behavioral approach to asset pricing, based on the noise trader model and limits to arbitrage, includes investor sentiment as a priced risk factor in the asset pricing model. Investor sentiment more strongly affects stocks that are vulnerable to speculation, hard to value, and risky to arbitrage, including small stocks, high-volatility stocks, growth stocks, distressed stocks, young stocks, and non-dividend-paying stocks. Since the introduction of the Chicago Board Options Exchange (CBOE) volatility index (VIX) in 1993, it has been used as a measure of future volatility in the stock market and also as a measure of investor sentiment. The CBOE VIX index, in particular, is often referred to as the 'investors' fear gauge' by the public media and prior literature. Upward spikes in the volatility index are associated with bouts of market turmoil and uncertainty. High levels of the volatility index indicate fear, anxiety, and pessimistic expectations of investors about the stock market; on the contrary, low levels reflect a confident and optimistic attitude. Based on the above discussion, we investigate whether the market-wide fear level measured by the volatility index is a priced factor in the standard asset pricing model for the Indian stock market. First, we investigate the performance and validity of the Fama and French three-factor model and the Carhart four-factor model in the Indian stock market. Second, we explore whether the India volatility index, as a proxy for fear-based market sentiment, affects the cross-section of stock returns after controlling for well-established risk factors such as market excess return, size, book-to-market, and momentum. Asset pricing tests are performed using monthly data on CNX 500 index constituent stocks listed on the National Stock Exchange of India Limited (NSE) over a sample period that extends from January 2008 to March 2017. To examine whether the India volatility index, as an indicator of fear sentiment, is a priced risk factor, the change in India VIX is included as an explanatory variable in the Fama-French three-factor model as well as the Carhart four-factor model. For the empirical testing, we use three different sets of test portfolios as the dependent variable in the asset pricing regressions. The first portfolio set is the 4x4 sort on size and B/M ratio. The second portfolio set is the 4x4 sort on size and the sensitivity beta of the change in IVIX. The third portfolio set is the 2x3x2 independent triple sort on size, B/M, and the sensitivity beta of the change in IVIX. We find evidence that size, value, and momentum factors continue to exist in the Indian stock market; however, the VIX index does not constitute a priced risk factor in the cross-section of returns. The inseparability of volatility and jump risk in the VIX is a possible explanation of the findings of the study.
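
A minimal sketch of the augmented asset pricing regression described above: the Carhart four factors plus the change in India VIX, run on synthetic data; the column names are assumptions for illustration.

```python
# Carhart four-factor regression augmented with the change in India VIX.
# Synthetic stand-in data; PORT_RF is the portfolio excess return.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(110, 6)),
                  columns=["PORT_RF", "MKT_RF", "SMB", "HML", "WML", "dIVIX"])

X = sm.add_constant(df[["MKT_RF", "SMB", "HML", "WML", "dIVIX"]])
fit = sm.OLS(df["PORT_RF"], X).fit()
# A significant dIVIX loading would suggest fear sentiment is priced.
print(fit.params["dIVIX"], fit.pvalues["dIVIX"])
```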

Keywords: India VIX, Fama-French model, Carhart four-factor model, asset pricing

Procedia PDF Downloads 235
22755 Association Rules Mining Task Using Metaheuristics: Review

Authors: Abir Derouiche, Abdesslem Layeb

Abstract:

Association Rule Mining (ARM) is one of the most popular data mining tasks, and it is widely used in various areas. The search for association rules is an NP-complete problem, which is why metaheuristics have been widely used to solve it. The present paper presents ARM as an optimization problem and surveys the approaches proposed in the literature based on metaheuristics.
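
For concreteness, a minimal sketch of the quantities an ARM search typically optimizes, the support and confidence of a candidate rule, evaluated on a toy transaction set.

```python
# Support and confidence of a candidate association rule A -> B.
transactions = [{"milk", "bread"}, {"milk", "eggs"}, {"bread"},
                {"milk", "bread", "eggs"}]

def support(itemset: set) -> float:
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent: set, consequent: set) -> float:
    return support(antecedent | consequent) / support(antecedent)

print(support({"milk", "bread"}), confidence({"milk"}, {"bread"}))
```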

Keywords: optimization, metaheuristics, data mining, association rules mining

Procedia PDF Downloads 146
22754 Ubiquitous Life People Informatics Engine (U-Life PIE): Wearable Health Promotion System

Authors: Yi-Ping Lo, Shi-Yao Wei, Chih-Chun Ma

Abstract:

Since Google launched Google Glass in 2012, a number of commercial wearable devices have been released, such as smart belts, smart bands, smart shoes, smart clothes, etc. However, most of these devices act as sensors that show the readings of measurements, and few of them provide interactive feedback to the user. Furthermore, these devices are single-task devices which are not able to communicate with each other. In this paper a new health promotion system, Ubiquitous Life People Informatics Engine (U-Life PIE), will be presented. This engine consists of the People Informatics Engine (PIE) and an interactive user interface. PIE collects all the data from the compatible devices, analyzes this data comprehensively, and communicates between devices via various application programming interfaces. All the data and information are stored on the PIE unit; therefore, the user is able to view the instant and historical data on their mobile devices at any time. The system also provides real-time hands-free feedback and instructions through the user interface visually, acoustically, and tactilely. These feedback and instructions prompt the user to adjust their posture or habits in order to avoid physical injuries and prevent illness.

Keywords: machine learning, wearable devices, user interface, user experience, internet of things

Procedia PDF Downloads 271
22753 Study and Conservation of Cultural and Natural Heritages with the Use of Laser Scanner and Processing System for 3D Modeling Spatial Data

Authors: Julia Desiree Velastegui Caceres, Luis Alejandro Velastegui Caceres, Oswaldo Padilla, Eduardo Kirby, Francisco Guerrero, Theofilos Toulkeridis

Abstract:

It is fundamental to conserve sites of natural and cultural heritage with any available technique or existing methodology of preservation in order to sustain them for the following generations. We propose a further means to protect the present appearance of such sites, in which high-technology instrumentation allows us to digitally preserve natural and cultural heritage, applied here in Ecuador. In this project, the use of laser technology for three-dimensional models is presented, offering high accuracy in a relatively short period of time. In Ecuador, so far, there are no records on the use and processing of data obtained by this new technological trend. The importance of the project lies in the description of the methodology of the laser scanner system using the Faro Laser Scanner Focus 3D 120, the method for 3D modeling of geospatial data, and the development of virtual environments in the areas of cultural and natural heritage. In order to inform users of this trend in technology, in which three-dimensional models are generated, tools have been developed so that the models can be displayed in all kinds of digital formats. The obtained 3D models demonstrate that this technology is extremely useful in these areas, while also indicating that each data campaign needs a slightly different individual procedure, starting with the data capture and processing and finishing with the chosen virtual environments.

Keywords: laser scanner system, 3D model, cultural heritage, natural heritage

Procedia PDF Downloads 291
22752 Proposing an Architecture for Drug Response Prediction by Integrating Multiomics Data and Utilizing Graph Transformers

Authors: Nishank Raisinghani

Abstract:

Efficiently predicting drug response remains a challenge in the realm of drug discovery. To address this issue, we propose four model architectures that combine graphical representation with varying positions of multiheaded self-attention mechanisms. By leveraging two types of multi-omics data, transcriptomics and genomics, we create a comprehensive representation of target cells and enable drug response prediction in precision medicine. A majority of our architectures utilize multiple transformer models, one with a graph attention mechanism and the other with a multiheaded self-attention mechanism, to generate latent representations of the drug and omics data, respectively. Our model architectures apply an attention mechanism to both the drug and the multiomics data, with the goal of procuring more comprehensive latent representations. The latent representations are then concatenated and input into a fully connected network to predict the IC50 score, a measure of cell drug response. We experiment with all four of these architectures and report results from each. Our study contributes to the future of drug discovery and precision medicine by seeking to optimize the time and accuracy of drug response prediction.
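
A minimal sketch, in PyTorch, of the shared structure described above: two encoders produce drug and omics latents that are concatenated and fed to a fully connected head predicting IC50. Plain transformer encoders stand in for the paper's graph-attention component, and all dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DrugResponseNet(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.drug_enc, self.omics_enc = enc(), enc()
        self.head = nn.Sequential(nn.Linear(2 * d_model, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, drug_tokens, omics_tokens):
        d = self.drug_enc(drug_tokens).mean(dim=1)    # pooled drug latent
        o = self.omics_enc(omics_tokens).mean(dim=1)  # pooled omics latent
        return self.head(torch.cat([d, o], dim=-1))   # predicted IC50

net = DrugResponseNet()
ic50 = net(torch.randn(8, 10, 64), torch.randn(8, 20, 64))
```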

Keywords: drug discovery, transformers, graph neural networks, multiomics

Procedia PDF Downloads 129
22751 Masked Candlestick Model: A Pre-Trained Model for Trading Prediction

Authors: Ling Qi, Matloob Khushi, Josiah Poon

Abstract:

This paper introduces a pre-trained Masked Candlestick Model (MCM) for trading time-series data. The pre-trained model is based on three core designs. First, we convert the trading price data at each data point into a set of normalized elements and produce embeddings of each element. Second, we generate a masked sequence of such embedded elements as inputs for self-supervised learning. Third, we use the encoder mechanism from the transformer to train on the inputs. The masked model learns the contextual relations among the sequence of embedded elements, which can aid downstream classification tasks. To evaluate the performance of the pre-trained model, we fine-tune MCM for three different downstream classification tasks to predict future price trends. The fine-tuned models achieved better accuracy rates for all three tasks than the baseline models. To further analyze the effectiveness of MCM, we test the same architecture on three currency pairs, namely EUR/GBP, AUD/USD, and EUR/JPY. The experimental results demonstrate MCM's effectiveness on all three currency pairs and indicate MCM's capability for signal extraction from trading data.
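
A minimal sketch of the masking step described above; the mask ratio, shapes, and the zero stand-in for a learned mask embedding are assumptions for illustration, not the paper's exact design.

```python
import torch

def mask_sequence(x: torch.Tensor, mask_ratio: float = 0.15):
    """x: (batch, seq_len, dim) embedded candlestick elements."""
    mask = torch.rand(x.shape[:2]) < mask_ratio       # positions to hide
    masked = x.clone()
    masked[mask] = 0.0                                # stand-in for a learned [MASK] embedding
    return masked, mask                               # mask marks reconstruction targets

masked, target_positions = mask_sequence(torch.randn(4, 32, 16))
```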

Keywords: masked language model, transformer, time series prediction, trading prediction, embedding, transfer learning, self-supervised learning

Procedia PDF Downloads 111
22750 Implementing Service Learning in the Health Education Curriculum

Authors: Karen Butler

Abstract:

Johnson C. Smith University, one of the nation's oldest Historically Black Colleges and Universities, has a strong history of service learning and community service. We first integrated service learning and peer education into health education courses in the spring of 2000. Students enrolled in the classes served as peer educators for the semester. Since then, the program has evolved and expanded but remains an integral part of several courses. The purpose of this session is to describe our program in terms of its development, successes, obstacles, and the feedback received. A detailed description of the service learning component in HED 235: Drugs and Drug Education and HED 337: Environmental Health will be provided. These classes are required of our Community Health majors but are also popular electives for students in other disciplines. Three sources of student feedback were used to evaluate and continually modify the component: the SIR II course evaluation, service learning reflection papers, and focus group interviews. Student feedback has been largely positive; when criticism was given, it was thoughtful and constructive, offered in the spirit of making the program better for the next group. Students consistently agreed that the service learning program increased their awareness of pertinent health issues; that both the service providers and service recipients benefited from the project; and that the goals and issues targeted by the service learning component fit the objectives of the course. Also, evidence of curriculum and learning enhancement was found in the reflection papers and focus group sessions. Service learning sets up a win-win situation: it provides a way to respond to campus and community health needs while enhancing the curriculum, as students learn more by doing things that benefit the health and wellness of others. Service learning is suitable for any health education course, and any target audience would welcome the effort.

Keywords: black colleges, community health, health education, service learning

Procedia PDF Downloads 330
22749 Design of Traffic Counting Android Application with Database Management System and Its Comparative Analysis with Traditional Counting Methods

Authors: Muhammad Nouman, Fahad Tiwana, Muhammad Irfan, Mohsin Tiwana

Abstract:

Traffic congestion has been increasing significantly in major metropolitan areas as a result of increased motorization, urbanization, population growth, and changes in urban density. Traffic congestion compromises the efficiency of transport infrastructure and causes multiple traffic concerns, including but not limited to increased travel time, safety hazards, air pollution, and fuel consumption. Traffic management has become a serious challenge for federal and provincial governments, as well as for exasperated commuters. Effective, flexible, efficient, and user-friendly traffic information/database management systems characterize traffic conditions by making use of traffic counts for storage, processing, and visualization. While emerging data collection technologies continue to proliferate, their accuracy can be verified through comparison of observed data with manual handheld counts. This paper presents the design of a tablet-based manual traffic counting application and a framework for the development of a traffic database management system for Pakistan. The database management system comprises three components: a traffic counting Android application, an online database, and visualization using Google Maps. An Oracle relational database was chosen to develop the data structure, whereas Structured Query Language (SQL) was adopted to program the system architecture. The GIS application links the data from the database and projects it onto a dynamic map for visualization of traffic conditions. The traffic counting device, and this example of a database application applied to a real-world problem, provide a creative outlet to visualize the uses and advantages of a database management system in real time. Traffic counts collected by means of the handheld tablet/mobile application can also be used for transportation planning and forecasting.
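
A minimal sketch of the kind of count-record structure such a system stores, using SQLite in place of the Oracle database for portability; the table and column names are hypothetical, not the authors' schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE counts (
    station_id INTEGER, lat REAL, lon REAL,
    counted_at TEXT, vehicle_class TEXT, volume INTEGER)""")
con.execute("INSERT INTO counts VALUES "
            "(1, 33.60, 73.04, '2018-05-01T08:00', 'car', 412)")
# Aggregate per station, e.g. for projection onto the dynamic map.
hourly = con.execute(
    "SELECT station_id, SUM(volume) FROM counts GROUP BY station_id").fetchall()
print(hourly)
```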

Keywords: manual count, emerging data sources, traffic information quality, traffic surveillance, traffic counting device, Android, data visualization, traffic management

Procedia PDF Downloads 178
22748 A Performance Study of Fixed, Single-Axis and Dual-Axis Photovoltaic Systems in Kuwait

Authors: A. Al-Rashidi, A. El-Hamalawi

Abstract:

In this paper, a performance study was conducted to investigate single-axis and dual-axis PV systems for electricity generation at five different sites in Kuwait. Relevant data were obtained from two sources for validation purposes. Commercial software, PVsyst, was used to analyse the data, such as meteorological data and other input parameters, and to compute performance parameters such as the capacity factor (CF) and final yield (YF). The results indicated that single- and dual-axis PV systems would be very beneficial for electricity generation in Kuwait as an alternative to conventional power plants, especially with the increasing demand over time. The ranges were also found to be competitive in comparison to leading countries using similar systems. Significant increases in CF and YF values of around 24% and 28.8% were achieved through the use of single-axis and dual-axis systems, respectively.
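
For reference, the two performance parameters reported above follow their standard definitions, sketched below with placeholder numbers rather than the study's site data.

```python
# Capacity factor and final yield in their standard forms.
def capacity_factor(energy_kwh: float, rated_kw: float, hours: float) -> float:
    return energy_kwh / (rated_kw * hours)            # fraction of ideal output

def final_yield(energy_kwh: float, rated_kw: float) -> float:
    return energy_kwh / rated_kw                      # kWh delivered per kW installed

# Placeholder: 100 kW array producing 180 MWh over one year (8760 h).
print(capacity_factor(180_000, 100, 8760), final_yield(180_000, 100))
```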

Keywords: single-axis and dual-axis photovoltaic systems, capacity factor, final yield, Kuwait

Procedia PDF Downloads 283
22747 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP

Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang

Abstract:

Climate data quality significantly affects the reliability of ecological modeling. In the Asia Pacific (AP) region, low-quality climate data hinders ecological modeling. ClimateAP, a software developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture. However, its adoption remains limited. This study aims to confirm the validity of biologically relevant variable data generated by ClimateAP during the normal climate period through comparison with the currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP in comparison with the commonly used gridded data from WorldClim1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using Adjusted R-Squared and Root Mean Squared Error (RMSE). Locations were categorized into mountainous and flat landforms, considering elevation, slope, ruggedness, and Topographic Position Index. Univariate regressions were then applied to all biologically relevant variables for each landform category. Random Forest (RF) models were implemented for the climatic niche modeling of Cunninghamia lanceolata. A comparative analysis of the prediction accuracies of RF models constructed with distinct climate data sources was conducted to evaluate their relative effectiveness. Biologically relevant variables were obtained from three unpublished Chinese meteorological datasets. ClimateAPv3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases with 3,745 unique points. ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of monthly maximum, minimum, average temperature, and precipitation variances, respectively. It outperforms WorldClim in 37 biologically relevant variables with lower RMSE values. ClimateAP achieves higher R-squared values for the 12 monthly minimum temperature variables and consistently higher Adjusted R-squared values across all landforms for precipitation. ClimateAP's temperature data yields lower Adjusted R-squared values than gridded data in high-elevation, rugged, and mountainous areas but achieves higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP is validated based on evaluations using observations from weather stations. The use of ClimateAP leads to an improvement in data quality, especially in non-mountainous regions. The results also suggest that using biologically relevant variables generated by ClimateAP can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience compared to using gridded data.
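
A minimal sketch of the station-level evaluation described above: regress observations on each source's predictions and compare adjusted R-squared and RMSE; the arrays are synthetic stand-ins for the 2,366-station data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
obs = rng.normal(10, 5, 2366)                         # e.g. monthly mean temperature
predictions = {"ClimateAP": obs + rng.normal(0, 1, obs.size),
               "WorldClim": obs + rng.normal(0, 2, obs.size)}

for name, pred in predictions.items():
    fit = sm.OLS(obs, sm.add_constant(pred)).fit()    # univariate regression
    rmse = np.sqrt(np.mean((obs - pred) ** 2))
    print(name, f"adj R2 = {fit.rsquared_adj:.3f}", f"RMSE = {rmse:.2f}")
```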

Keywords: climate data validation, data quality, Asia Pacific climate, climatic niche modeling, random forest models, tree species

Procedia PDF Downloads 54
22746 Eye Tracking: Biometric Evaluations of Instructional Materials for Improved Learning

Authors: Janet Holland

Abstract:

Eye tracking is a great way to triangulate multiple data sources for deeper, more complete knowledge of how instructional materials are really being used and of the emotional connections made. Using sensor-based biometrics provides detailed local analysis in real time, expanding our ability to collect science-based data for a more comprehensive level of understanding of teaching and learning than was previously possible. The knowledge gained will be used to make future improvements to instructional materials, tools, and interactions. The literature has been examined, and a preliminary pilot test was implemented to develop a methodology for research in Instructional Design and Technology. Eye tracking now offers the addition of objective metrics, obtained from eye tracking and other biometric data collection, with analysis for a fresh perspective.

Keywords: area of interest, eye tracking, biometrics, fixation, fixation count, fixation sequence, fixation time, gaze points, heat map, saccades, time to first fixation

Procedia PDF Downloads 118
22745 A Proposed Mechanism for Skewing Symmetric Distributions

Authors: M. T. Alodat

Abstract:

In this paper, we propose a mechanism for skewing any symmetric distribution. The new distribution is called the deflation-inflation distribution (DID). We discuss some statistical properties of the DID, such as its moments, stochastic representation, and log-concavity. We also fit the distribution to real data and compare it to the normal distribution and Azzalini's skew-normal distribution. Numerical results show that the DID fits the tree-ring data better than the other two distributions.
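
For reference, the Azzalini skew-normal density against which the DID is compared (a standard result; the DID's own density is not reproduced in this abstract):

```latex
% Azzalini's skew-normal density: \phi and \Phi are the standard normal
% pdf and cdf, and \alpha controls the skewness (\alpha = 0 recovers N(0,1)).
f(x;\alpha) = 2\,\phi(x)\,\Phi(\alpha x), \qquad x \in \mathbb{R}
```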

Keywords: normal distribution, moments, Fisher information, symmetric distributions

Procedia PDF Downloads 646
22744 Polarimetric Synthetic Aperture Radar Data Classification Using Support Vector Machine and Mahalanobis Distance

Authors: Najoua El Hajjaji El Idrissi, Necip Gokhan Kasapoglu

Abstract:

Polarimetric Synthetic Aperture Radar-based imaging is a powerful technique used for earth observation and the classification of surfaces. Forest evolution has been one of the vital areas of attention for remote sensing experts. Information about forest areas can be obtained by remote sensing, whether by using active radars or optical instruments. However, due to several weather constraints, such as cloud cover, only limited information can be recovered using optical data, and for that reason, Polarimetric Synthetic Aperture Radar (PolSAR) is used as a powerful tool for forestry inventory. In this paper, we applied a support vector machine (SVM) and the Mahalanobis distance to fully polarimetric AIRSAR P-, L-, and C-band data from the Nezer forest areas; the classification is based on the separation of different tree ages. The classification results were evaluated, and they show that the SVM performs better than the Mahalanobis distance, achieving approximately 75% accuracy. This result proves that SVM classification can be a useful method for evaluating fully polarimetric SAR data with a sufficient level of accuracy.
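
A minimal sketch of the two classifiers being compared, run on synthetic stand-ins for per-pixel polarimetric features; the feature dimensions, class statistics, and kernel choice are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 1, (200, 6))                     # tree-age class 0 features
X1 = rng.normal(1.2, 1, (200, 6))                     # tree-age class 1 features
X, y = np.vstack([X0, X1]), np.r_[np.zeros(200), np.ones(200)]

svm = SVC(kernel="rbf").fit(X, y)                     # SVM classifier

def mahalanobis_classify(x, classes):
    """Assign x to the class with the smallest Mahalanobis distance."""
    dists = []
    for Xc in classes:
        mu, cov_inv = Xc.mean(0), np.linalg.inv(np.cov(Xc.T))
        d = x - mu
        dists.append(d @ cov_inv @ d)
    return int(np.argmin(dists))

print(svm.predict(X[:1])[0], mahalanobis_classify(X[0], [X0, X1]))
```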

Keywords: classification, synthetic aperture radar, SAR polarimetry, support vector machine, Mahalanobis distance

Procedia PDF Downloads 118
22743 Short Life Cycle Time Series Forecasting

Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar

Abstract:

The life cycle of products is becoming shorter and shorter due to increased competition in the market, shorter product development times, and increased product diversity. Short life cycles are normal in the retail industry, style business, entertainment media, and the telecom and semiconductor industries. The subject of accurate demand forecasting for short-lifecycle products is of special interest to many researchers and organizations. Due to the short life cycle of products, the amount of historical data available for forecasting is very minimal, or even absent when new or modified products are launched in the market. The companies dealing with such products want to increase the accuracy of demand forecasting so that they can utilize the full potential of the market while at the same time not oversupplying. This poses the challenge of developing a forecasting model that can forecast accurately while handling large variations in data and considering the complex relationships between various parameters of the data. Many statistical models have been proposed in the literature for forecasting time series data. Traditional time series forecasting models do not work well for short life cycles due to the lack of historical data. Artificial neural network (ANN) models are also very time-consuming for forecasting. We have studied the existing models that are used for forecasting, along with their limitations. This work proposes an effective and powerful approach for short-life-cycle time series forecasting. The proposed approach takes into consideration different scenarios related to data availability for short-lifecycle products. We then suggest a methodology which combines statistical analysis with structured judgement, and the defined approach can be applied across domains. We then describe the method of creating a profile from analogous products; this profile can be used for forecasting products using the historical data of analogous products. We have designed an application which combines data, analytics, and domain knowledge using point-and-click technology. The forecasting results generated are compared using MAPE, MSE, and RMSE error scores. Conclusion: based on the results, it is observed that no one approach is sufficient for short-life-cycle forecasting, and two or more approaches need to be combined to achieve the desired accuracy.
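
For reference, the three error scores used in the comparison, sketched below in their standard forms.

```python
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100  # percent

def mse(actual, forecast):
    return np.mean((np.asarray(actual, float) - np.asarray(forecast, float)) ** 2)

def rmse(actual, forecast):
    return np.sqrt(mse(actual, forecast))

print(mape([100, 120], [90, 130]), rmse([100, 120], [90, 130]))
```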

Keywords: forecast, short life cycle product, structured judgement, time series

Procedia PDF Downloads 340
22742 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles

Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi

Abstract:

Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine, and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. This study uses information on around 40,000 vehicles' specifications and operational environmental conditions, such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms linear regression (LR), K-nearest neighbor (KNN), and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. The performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and is compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.
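
A minimal sketch of the three-way model comparison, using plain (rather than nested) cross-validation for brevity; the features, target, and hyperparameters are synthetic stand-ins for the vehicle-specification data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 8))                         # specs, road slope, driver profile
y = X @ rng.normal(size=8) + rng.normal(0, 0.3, 500)  # fuel consumption target

for name, model in [("LR", LinearRegression()),
                    ("KNN", KNeighborsRegressor(5)),
                    ("ANN", MLPRegressor((32,), max_iter=2000, random_state=0))]:
    score = -cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error").mean()
    print(name, f"CV RMSE = {score:.3f}")
```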

Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing

Procedia PDF Downloads 165
22741 Design and Implement a Remote Control Robot Controlled by Zigbee Wireless Network

Authors: Sinan Alsaadi, Mustafa Merdan

Abstract:

Communication and access systems can be built with many methods in today's world, based on such standards as Wi-Fi, WiMAX, Bluetooth, GPS, and GPRS. Devices which use these standards also consume system resources in direct proportion to their transmission speed. However, large-scale data communication is not always needed. In such cases, a technology is needed which uses system resources as sparingly as possible and supports smart network topologies, in order to enable the transmission of small data packets and provide control for this kind of device. IEEE issued the 802.15.4 standard to meet this necessity, enabling the ZigBee protocol, which takes this standard as its basis, and the devices which support it. In our project, this communication protocol was preferred. The aim of this study is to provide immediate data transmission from our robot in the field within the scope of the project, and to communicate with the robot through the ZigBee protocol. The design is based on obtaining the desired data from the region where the robot is located while sitting at a computer. An Arduino Uno R3 microcontroller provides the control mechanism, with a 1298 shield as the motor driver.

Keywords: ZigBee, wireless network, remote monitoring, smart home, agricultural industry

Procedia PDF Downloads 265
22740 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (i.e., car engines or thermal power stations). The same holds for industrial plants. What has to be investigated, and is the topic of this paper, is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used, an Arduino board to which the sensors and all the other components are plugged. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time, to provide input for both weekdays and weekend days; in this way it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to fit on a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to apply in the area of analysis, because it will make it possible to address both the noise and the air pollution problems with only one intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper will describe in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, and the noise and pollution mapping and analysis.
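
A minimal sketch of the comparison step described above: each pollutant series is rescaled to a 0-100% scale and their correlation is computed; the readings below are placeholders, not field data.

```python
import numpy as np

def to_percent(series):
    s = np.asarray(series, float)
    return 100 * (s - s.min()) / (s.max() - s.min())  # rescale to 0-100%

noise_db = [55, 62, 70, 68, 58, 74]                   # OpeNoise readings (hypothetical)
no2_ppb = [18, 25, 34, 30, 20, 38]                    # AirMonitor readings (hypothetical)

r = np.corrcoef(to_percent(noise_db), to_percent(no2_ppb))[0, 1]
print(f"noise/NO2 correlation: r = {r:.2f}")
```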

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 201
22739 Tuning Cubic Equations of State for Supercritical Water Applications

Authors: Shyh Ming Chern

Abstract:

Cubic equations of state (EoS), popular due to their simple mathematical form, ease of use, semi-theoretical nature, and reasonable accuracy, are normally fitted to vapor-liquid equilibrium P-v-T data. As a result, they often show poor accuracy in the region near and above the critical point. In this study, the performance of the renowned Peng-Robinson (PR) and Patel-Teja (PT) EoS's around the critical area has been examined against the P-v-T data of water. Both of them display large deviations at the critical point. For instance, the PR-EoS exhibits discrepancies as high as 47% for the specific volume, 28% for the enthalpy departure, and 43% for the entropy departure at the critical point. It is shown that incorporating P-v-T data of the supercritical region into the retuning of a cubic EoS can improve its performance above the critical point dramatically. Adopting a retuned acentric factor of 0.5491 instead of its genuine value of 0.344 for water in the PR-EoS, and a new F of 0.8854 instead of its original value of 0.6898 for water in the PT-EoS, reduces the discrepancies to about one third or less.
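
A minimal sketch of the retuning idea: the standard Peng-Robinson pressure for water evaluated with the genuine acentric factor (0.344) versus the retuned value (0.5491) quoted above; the state point chosen is an arbitrary supercritical illustration.

```python
# Standard Peng-Robinson EoS: P = RT/(v-b) - a*alpha/(v^2 + 2bv - b^2).
import math

R, TC, PC = 8.314, 647.1, 22.064e6                    # J/mol/K, K, Pa (water)

def pr_pressure(T: float, v: float, omega: float) -> float:
    a = 0.45724 * R**2 * TC**2 / PC
    b = 0.07780 * R * TC / PC
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - math.sqrt(T / TC))) ** 2
    return R * T / (v - b) - a * alpha / (v**2 + 2 * b * v - b**2)

for omega in (0.344, 0.5491):                         # genuine vs retuned
    print(omega, pr_pressure(700.0, 1e-4, omega))     # supercritical state point
```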

Keywords: equation of state, EoS, supercritical water, SCW

Procedia PDF Downloads 514
22738 A Safety Analysis Method for Multi-Agent Systems

Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller

Abstract:

Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems. A feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a multi-agent implementation. Using the ontology, our method then constructs an 'Interaction Map', a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps, combined with statistical data from accidents and the HAZOP classifications of events, can be converted into a Bayesian network. Bayesian networks allow designers to explore 'what if' scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.

Keywords: multi-agent system, safety analysis, safety model, interaction map

Procedia PDF Downloads 402
22737 Mobile Genetic Elements in Trematode Himasthla Elongata Clonal Polymorphism

Authors: Anna Solovyeva, Ivan Levakin, Nickolai Galaktionov, Olga Podgornaya

Abstract:

For a long time, animals that reproduce asexually were thought to have the same genotypes within generations. However, some refuting examples were found, and mobile genetic elements (MGEs), or transposons, are considered to be the most probable source of genetic instability. Their dispersed nature and ability to change their genomic localization enable MGEs to be efficient mutators. Hence, the study of the genomic impact of MGEs requires an appropriate object, one which offers both a representative variety of MGEs and options to evaluate their genomic influence. Animals that reproduce asexually seem to be a decent model for studying the impact of MGEs on genomic variability. We found the small marine trematode Himasthla elongata (Himasthlidae) to be a good model for such investigation, as it has a small genome size, diverse MGEs, and parthenogenetic stages in its lifecycle. In the current work, the clonal diversity of cercariae was traced with the AFLP (Amplified Fragment Length Polymorphism) method; diverse zones from the electrophoretic patterns were cloned, and the nature of the fragments explored. The polymorphic patterns of individual cercariae's AFLP-based fingerprints are enriched with retrotransposons of different families. The bulk of these sequences are open reading frames of non-Long Terminal Repeat (non-LTR) elements, with Long Terminal Repeat (LTR) elements represented to a lesser extent in the variable fragments of the AFLP array. The CR1 elements, exposed in both polymorphic and conserved patterns, are remarkably more frequent than the other non-LTR retrotransposons. This data was confirmed with shotgun sequencing on the Illumina HiSeq 2500 platform. Individual cercariae of the same clone (i.e., originating from a single miracidium and inhabiting one host) show varying distributions of the MGE families detected in the sequenced AFLP patterns. The most numerous are CR1 and RTE-Bov retrotransposons, typical for trematode genomes. We also identified LTR retrotransposons of the Pao and Gypsy families, along with DNA transposons of the CMC-EnSpm, Tc1/Mariner, MuLE-MuDR, and Merlin families, and detected many of them in the H. elongata transcriptome. Such uneven distribution of MGEs across the sets of AFLP sequences reflects the different patterns of transposon spreading in cercarial genomes, as transposons affect the genome in many ways (ectopic recombination, gene structure interruption, epigenetic silencing). It is considered that they play a key role in the origins of trematode clonal polymorphism. The authors greatly appreciate the help received at the Kartesh White Sea Biological Station of the Russian Academy of Sciences Zoological Institute. This work is funded by RSF 19-74-20102 and RFBR 17-04-02161 grants and the research program of the Zoological Institute of the Russian Academy of Sciences (project number AAAA-A19-119020690109-2).

Keywords: AFLP, clonal polymorphism, Himasthla elongata, mobile genetic elements, NGS

Procedia PDF Downloads 113
22736 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold-start is a notoriously difficult problem which can occur in recommendation systems, arising when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm, the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST), is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases, and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, it is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.

Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson Sampling, variational inference

Procedia PDF Downloads 98
22735 A Quinary Coding and Matrix Structure Based Channel Hopping Algorithm for Blind Rendezvous in Cognitive Radio Networks

Authors: Qinglin Liu, Zhiyong Lin, Zongheng Wei, Jianfeng Wen, Congming Yi, Hai Liu

Abstract:

The multi-channel blind rendezvous problem in distributed cognitive radio networks (DCRNs) refers to how users in the network can hop to the same channel at the same time slot without any prior knowledge (i.e., each user is unaware of other users' information). The channel hopping (CH) technique is a typical solution to this blind rendezvous problem. In this paper, we propose a quinary coding and matrix structure-based CH algorithm called QCMS-CH. The QCMS-CH algorithm can guarantee the rendezvous of users using only one cognitive radio in the scenario of asynchronous clocks (i.e., arbitrary time drift between the users), heterogeneous channels (i.e., the available channel sets of users are distinct), and symmetric roles (i.e., all users play the same role). The QCMS-CH algorithm first represents a randomly selected channel (denoted by R) as a fixed-length quaternary number. Then it encodes the quaternary number into a quinary bootstrapping sequence according to a carefully designed quaternary-quinary coding table with the prefix 'R00'. Finally, it builds a CH matrix column by column according to the bootstrapping sequence and six different types of elaborately generated subsequences. A user can access the CH matrix row by row and accordingly perform its channel hopping to attempt rendezvous with other users. We prove the correctness of QCMS-CH and derive an upper bound on its Maximum Time-to-Rendezvous (MTTR). Simulation results show that the QCMS-CH algorithm outperforms the state-of-the-art in terms of the MTTR and the Expected Time-to-Rendezvous (ETTR).
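
A minimal sketch of the algorithm's first step, representing the selected channel R as a fixed-length quaternary number ready for the quaternary-to-quinary coding stage; the length chosen here is an assumption for illustration.

```python
def to_quaternary(r: int, length: int) -> list[int]:
    """Represent channel index r as a fixed-length base-4 digit sequence."""
    digits = []
    for _ in range(length):
        digits.append(r % 4)
        r //= 4
    return digits[::-1]                               # most significant digit first

print(to_quaternary(r=11, length=3))                  # channel 11 -> [0, 2, 3]
```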

Keywords: channel hopping, blind rendezvous, cognitive radio networks, quaternary-quinary coding

Procedia PDF Downloads 76
22734 A Discrete Element Method Centrifuge Model of Monopile under Cyclic Lateral Loads

Authors: Nuo Duan, Yi Pik Cheng

Abstract:

This paper presents the data from a series of two-dimensional Discrete Element Method (DEM) simulations of a large-diameter rigid monopile subjected to cyclic loading under a high gravitational force. At present, monopile foundations are widely used to support tall and heavy wind turbines, which are also subjected to significant loads from wind and wave actions. A safe design must address issues such as rotations and changes in soil stiffness under these loading conditions. Design guidance on the issue is limited, as is the availability of laboratory and field test data. The interpretation of these results in sand, such as the relation between loading and displacement, relies mainly on empirical correlations to pile properties. Regarding numerical models, most available data come from the Finite Element Method (FEM); they are not comprehensive, and most of the FEM results are sensitive to input parameters. Micro-scale behaviour could change the mechanism of the soil-structure interaction. A DEM model was used in this paper to study the behaviour under cyclic lateral loads. A non-dimensional framework is presented and applied to interpret the simulation results. The DEM data compare well with various sets of published experimental centrifuge model test data in terms of lateral deflection. The accumulated permanent pile lateral displacements induced by the cyclic lateral loads were found to be dependent on the characteristics of the applied cyclic load, such as the extent of the loading magnitudes and directions.

Keywords: cyclic loading, DEM, numerical modelling, sands

Procedia PDF Downloads 311
22733 Estimation of Desktop E-Wastes in Delhi Using Multivariate Flow Analysis

Authors: Sumay Bhojwani, Ashutosh Chandra, Mamita Devaburman, Akriti Bhogal

Abstract:

This article uses material flow analysis to estimate e-wastes in the Delhi/NCR region. The material flow analysis is based on sales data obtained from various sources. Much of the available sales data is unreliable because of the existence of a huge informal sector; the informal sector in India accounts for more than 90%. Therefore, the scope of this study is limited to the formal sector only. Also, for the projection of the sales data till 2030, we have used linear regression to avoid complexity. The actual sales in the years following 2015 may vary non-linearly, but we have assumed a basic linear relation. The purpose of this study was to estimate the approximate quantity of desktop e-wastes that we will have by the year 2030, so that we can start preparing ourselves for the ineluctable investment in the treatment of these ever-rising e-wastes. The results of this study can be used to plan a treatment plant for e-wastes in Delhi.
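
A minimal sketch of the projection step described above, fitting a linear trend to formal-sector sales and extrapolating to 2030; the sales figures are placeholders, not the study's data.

```python
import numpy as np

years = np.array([2010, 2011, 2012, 2013, 2014, 2015])
sales = np.array([1.10, 1.18, 1.24, 1.33, 1.40, 1.46])  # million units (hypothetical)

slope, intercept = np.polyfit(years, sales, 1)        # basic linear relation
print(f"projected 2030 sales: {slope * 2030 + intercept:.2f} million units")
```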

Keywords: e-wastes, Delhi, desktops, estimation

Procedia PDF Downloads 246