Search results for: statistical signal processing.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3591

321 Simulation of Static Frequency Converter for Synchronous Machine Operation and Investigation of Shaft Voltage

Authors: Arun Kumar Datta, M. A. Ansari, N. R. Mondal, B. V. Raghavaiah, Manisha Dubey, Shailendra Jain

Abstract:

This study is carried out to understand the effects of a static frequency converter (SFC) on a large machine. The SFC is capable of four-quadrant operation; by virtue of this it can be used to run a synchronous machine either as a motor or as an alternator. This dual-mode operation allows a single machine to start and run as a motor and then be converted into an alternator whenever required. One such dual-purpose machine is taken up here for study. The machine is installed at a laboratory that carries out short-circuit tests on high-power electrical equipment. The SFC connected to this machine is described in detail in this paper. The same SFC has been modeled in MATLAB/Simulink, and the data applied to this virtual model are the actual parameters of the SFC and the synchronous machine. After running the model, the simulated machine voltage and current waveforms are validated against the real measurements. The waveforms are processed with the Fast Fourier Transform (FFT), which reveals that they are not sinusoidal but contain a number of harmonics. These harmonics are the major cause of shaft voltage. It is known that the bearings of an electrical machine are vulnerable to the current that flows through them due to shaft voltage. A general discussion of the causes of shaft voltage in the context of this machine is also presented.
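The FFT-based harmonic inspection described above can be sketched in a few lines of Python; the waveform below is a made-up 50 Hz voltage with 5th and 7th harmonics, not the measured machine data.

```python
import numpy as np

# Hypothetical 50 Hz machine voltage with 5th and 7th harmonics (illustrative values only)
fs = 10_000                       # sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)     # 0.2 s window
v = (np.sin(2 * np.pi * 50 * t)
     + 0.20 * np.sin(2 * np.pi * 250 * t)
     + 0.12 * np.sin(2 * np.pi * 350 * t))

# One-sided FFT amplitude spectrum of the waveform
spectrum = np.fft.rfft(v)
freqs = np.fft.rfftfreq(v.size, d=1 / fs)
amps = 2 * np.abs(spectrum) / v.size

# Components above a small threshold: the fundamental plus its harmonics
for f, a in zip(freqs, amps):
    if a > 0.05:
        print(f"{f:6.1f} Hz  amplitude {a:.2f}")
```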

Keywords: Alternators, AC-DC power conversion, capacitive coupling, electric discharge machining, frequency converter, Fourier transforms, inductive coupling, simulation, Shaft voltage, synchronous machines, static excitation, thyristor.

320 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model (DPM) algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
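The speedup from computing convolutions in the frequency domain can be illustrated with a toy example (not the authors' DPM code): a frequency-domain convolution gives the same response map as direct spatial convolution, but scales much better for large feature planes and many filters. The feature map and filter sizes below are arbitrary.

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
feature_map = rng.standard_normal((400, 300))   # toy HOG-like feature plane
part_filter = rng.standard_normal((11, 7))      # toy part filter

direct = convolve2d(feature_map, part_filter, mode="valid")
via_fft = fftconvolve(feature_map, part_filter, mode="valid")

# Both routes give the same response map (up to floating-point error);
# the FFT route is much faster for large maps and many filters.
print(np.allclose(direct, via_fft))
```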

Keywords: Autonomous vehicles, deformable part model, dpm, pedestrian recognition.

319 The Impact of HIV/AIDS on Micro-enterprise Development in Kenya: A Study of Obunga Slum in Kisumu

Authors: C. A. Oloo, C. Ojwang

Abstract:

The performance of small and medium enterprises has stagnated in the last two decades, mainly due to the emergence of HIV/AIDS. The disease has had a detrimental effect on the general economy of the country, leading to morbidity and mortality of the Kenyan workforce in its prime age. The present study sought to establish the economic impact of HIV/AIDS on micro-enterprise development in Obunga slum, Kisumu, in terms of production loss and increasing labor-related costs, and to establish possible strategies to address this impact. The study was necessitated by the observation that most micro-enterprises in the slum face a severe economic and social crisis due to the impact of HIV/AIDS; they become depleted and close down within a short time owing to the death of skilled and experienced workers. The study was carried out between June 2008 and June 2009 in Obunga slum. Data were subjected to computer-aided statistical analysis that included descriptive statistics, chi-squared and ANOVA techniques. Chi-squared analysis of the micro-enterprise owners' opinions on the impact of HIV/AIDS on the depletion of micro-enterprises, compared to other diseases, indicated a strong negative effect of the disease at a significance level of P<0.01. Analysis of variance on the impact of HIV/AIDS on the performance and productivity of micro-enterprises also indicated a negative effect on the general performance of micro-enterprises at a significance level of P<0.01. Therefore, to reduce the negative impacts of HIV/AIDS on micro-enterprise development, there is a need to improve the socioeconomic environment, mobilize donors and stakeholders in training and funding, and review the current strategies for addressing the disease. Further conclusive research should also be conducted on a bigger scale.
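A minimal sketch of the two tests named above (chi-squared on a contingency table and one-way ANOVA across groups) is shown below with SciPy; the counts and scores are invented for illustration and are not the survey data.

```python
import numpy as np
from scipy import stats

# Hypothetical contingency table: owners' rating of business depletion
# (rows: HIV/AIDS vs other diseases; columns: low / moderate / high impact)
observed = np.array([[ 8, 15, 47],
                     [22, 24, 14]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, p = {p_chi2:.4f}")

# Hypothetical productivity scores of enterprises grouped by reported impact level
low = [74, 69, 71, 66, 70]
moderate = [61, 58, 63, 59, 60]
high = [42, 47, 39, 44, 41]
f_stat, p_anova = stats.f_oneway(low, moderate, high)
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
```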

Keywords: Entrepreneurship, HIV-AIDS, Micro-enterprise, Poverty.

318 Evaluation and Analysis of Lean-Based Manufacturing Equipment and Technology System for Jordanian Industries

Authors: Mohammad D. AL-Tahat, Shahnaz M. Alkhalil

Abstract:

The forces driving international markets are changing continuously; therefore, companies need to gain a competitive edge in such markets. Improving the company's products, processes and practices is no longer optional. Lean production is a production management philosophy that consolidates work tasks with minimum waste, resulting in improved productivity. Lean production practices can be mapped onto many production areas, one of which is Manufacturing Equipment and Technology (MET). Many lean production practices can be implemented in MET, namely specific equipment configurations, total preventive maintenance, visual control, new equipment/technologies, production process reengineering and a shared vision of perfection. The purpose of this paper is to investigate the implementation level of these six practices in Jordanian industries. To achieve this, a questionnaire survey was designed according to a five-point Likert scale and validated through a pilot study and expert review. A sample of 350 Jordanian companies was surveyed, with a response rate of 83%. The respondents were asked to rate the extent of implementation of each practice. A conceptual relationship model is developed, hypotheses are proposed, and the essential statistical analyses are then performed. An assessment tool that enables management to monitor the progress and effectiveness of lean practice implementation is designed and presented. The results show that the average implementation level of lean practices in MET is 77%, that Jordanian companies are successfully implementing the considered lean production practices, and that the presented model has a Cronbach's alpha value of 0.87, which is good evidence of model consistency and validates the results.
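Cronbach's alpha, used above to judge model consistency, can be computed directly from a respondents-by-items matrix of Likert scores. The sketch below uses invented responses for the six MET practices, not the study's survey data.

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: respondents x items matrix of Likert scores."""
    item_vars = responses.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = responses.sum(axis=1).var(ddof=1)     # variance of the total score
    k = responses.shape[1]
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative 5-point Likert answers (rows: respondents, columns: the six MET practices)
scores = np.array([[4, 5, 4, 3, 4, 5],
                   [3, 4, 4, 4, 3, 4],
                   [5, 5, 4, 4, 5, 5],
                   [2, 3, 3, 2, 3, 3],
                   [4, 4, 5, 4, 4, 4]])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```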

Keywords: Lean Production, SME applications, Visual Control, New equipment/technologies, Specific equipment configurations, Jordan

317 Microencapsulation of Ascorbic Acid by Spray Drying: Influence of Process Conditions

Authors: Addion Nizori, Lan T.T. Bui, Darryl M. Small

Abstract:

Ascorbic acid (AA), commonly known as vitamin C, is essential for normal functioning of the body and maintenance of metabolic integrity. Among its various roles are acting as an antioxidant and as a cofactor in collagen formation and other reactions, as well as reducing physical stress and maintaining the immune system. Recent collaborative research between the Australian Defence Science and Technology Organisation (DSTO) in Scottsdale, Tasmania and RMIT University has sought to overcome the problems arising from the inherent instability of ascorbic acid during the processing and storage of foods, and has demonstrated the potential of microencapsulation by spray drying as a means to enhance retention. The current study focused on the influence of spray drying conditions on the properties of encapsulated ascorbic acid. The process was carried out according to a central composite design. The independent variables were inlet temperature (80-120 °C) and feed flow rate (7-14 mL/min). Process yield, ascorbic acid loss, moisture content, water activity and particle size distribution were analysed as responses. Vitamin retention, moisture content, water activity and process yield were influenced positively by inlet air temperature and negatively by feed flow rate.

Keywords: Microencapsulation, spray drying, ascorbic acid.

316 A Robust Method for Finding Nearest-Neighbor using Hexagon Cells

Authors: Ahmad Attiq Al-Ogaibi, Ahmad Sharieh, Moh’d Belal Al-Zoubi, R. Bremananth

Abstract:

In pattern clustering, nearest-neighbor point computation is a challenging issue for many applications in research areas such as remote sensing, computer vision, pattern recognition and statistical imaging. Nearest-neighbor computation is essential for providing sufficient classification among the volume of pixels (voxels) in order to localize the active region of interest (AROI). Furthermore, it is needed to compute spatial metric relationships in diverse areas of imaging based on pattern recognition applications. In this paper, we propose a new methodology for finding the nearest-neighbor point, based on constructing a virtual grid of hexagon cells and then locating every point within its cell. An algorithm is suggested for minimizing the computation and improving the turnaround time of the process. The nearest-neighbor query points Φ are fetched by searching the hexagons holistically, and the search is repeated until an AROI Φ is expected. If a point Υ is located, searching continues in the nearest hexagons in a circular fashion: the first hexagon is considered level 0 (L0) and the surrounding hexagons level 1 (L1). If Υ is located in L1, then the search moves to the next level (L2) to ensure that Υ is the nearest neighbor of Φ. Based on the experimental results, we found that the proposed method has an advantage over traditional methods in terms of the time complexity required for searching the neighbors and, in turn, improves the efficiency of classification.

Keywords: Hexagon cells, k-nearest neighbors, nearest neighbor, pattern recognition, query pattern, virtual grid.

315 Climate Change in Albania and Its Effect on Cereal Yield

Authors: L. Basha, E. Gjika

Abstract:

This study analyzes climate change in Albania and its potential effects on cereal yields. Initially, monthly temperature and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when modeling cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data, relating cereal yield to each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have a multicollinearity problem. Machine learning methods, such as Random Forest (RF), are used to predict cereal yield responses to climatic and other variables. RF showed high accuracy compared to the other statistical models in predicting cereal yield. We found that changes in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the RF method is an effective and versatile machine learning method for cereal yield prediction compared to the other two methods, multiple linear regression and lasso regression.
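A minimal scikit-learn sketch of the three-model comparison described above is shown below; the data are synthetic stand-ins (62 yearly rows with six predictors mirroring the variables listed), not the Albanian series used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 62  # one row per year, 1960-2021
X = rng.standard_normal((n, 6))   # temperature, rainfall, fertilizer, arable land, cereal area, N2O
y = 3.0 + 1.5 * X[:, 2] + 1.0 * X[:, 3] - 0.8 * X[:, 0] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "multiple linear regression": LinearRegression(),
    "lasso": Lasso(alpha=0.1),
    "random forest": RandomForestRegressor(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name:28s} RMSE = {rmse:.3f}")
```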

Keywords: Cereal yield, climate change, machine learning, multiple regression model, random forest.

314 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function

Authors: Anupama Pande, Vishik Goel

Abstract:

A complex-valued neural network is a neural network with complex-valued inputs and/or weights and/or thresholds and/or activation functions. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function with a built-in distance criterion with respect to a centre. Radial basis functions have often been applied in neural networks, where they may be used as a replacement for the sigmoid hidden-layer transfer characteristic in multi-layer perceptrons. This paper presents extensive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex-valued neural network for image recognition over a real-valued neural network. We study and report various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for the convergence of error in the neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.

Keywords: Complex valued neural network, radial basis function, image recognition.

313 Estimating Saturated Hydraulic Conductivity from Soil Physical Properties using Neural Networks Model

Authors: B. Ghanbarian-Alavijeh, A.M. Liaghat, S. Sohrabi

Abstract:

Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface groundwater. Since its direct measurement is time consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, to estimate saturated hydraulic conductivity from readily available soil properties, e.g. sand, silt, and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural network (NN) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density, van Genuchten retention model parameters (i.e. θ_r, α, and n), as well as effective porosity. We used two methods to calculate effective porosity: (1) φ_eff = θ_s − θ_FC, and (2) φ_eff = θ_s − θ_inf, in which θ_s is the saturated water content, θ_FC is the water content retained at −33 kPa matric potential, and θ_inf is the water content at the inflection point. A total of 311 soil samples from the UNSODA database was divided into three groups: 187 for training, 62 for validation (to avoid over-training), and 62 for testing the NN model. The commercial neural network toolbox of MATLAB with a multi-layer perceptron model and the back-propagation algorithm was used for the training procedure. Statistical parameters such as the correlation coefficient (R²) and the mean square error (MSE) were used to evaluate the developed NN model. The best number of neurons in the middle layer of the NN model for methods (1) and (2) was found to be 44 and 6, respectively. The R² and MSE values of the test phase were 0.94 and 0.0016 for method (1), and 0.98 and 0.00065 for method (2), which shows that method (2) estimates saturated hydraulic conductivity better than method (1).
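The multi-layer perceptron with a train/validation/test split of 187/62/62 can be sketched as follows, assuming synthetic predictor data in place of the UNSODA samples and scikit-learn in place of the MATLAB toolbox used by the authors.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(2)
n = 311                                   # number of UNSODA samples in the study
X = rng.uniform(0, 1, (n, 6))             # sand, clay, bulk density, theta_r, alpha, n (illustrative)
log_Ks = 1.5 * X[:, 0] - 2.0 * X[:, 2] + 0.8 * X[:, 4] + rng.normal(0, 0.2, n)

# 187 / 62 / 62 split as in the paper (training / validation / test)
X_tr, y_tr = X[:187], log_Ks[:187]
X_te, y_te = X[249:], log_Ks[249:]

scaler = StandardScaler().fit(X_tr)
mlp = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0,
                   early_stopping=True, validation_fraction=0.25)  # held-out set avoids over-training
mlp.fit(scaler.transform(X_tr), y_tr)
pred = mlp.predict(scaler.transform(X_te))
print(f"R2 = {r2_score(y_te, pred):.2f}, MSE = {mean_squared_error(y_te, pred):.4f}")
```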

Keywords: Neural network, Saturated hydraulic conductivity, Soil physical properties.

312 Lower Energy Gait Pattern Generation in 5-Link Biped Robot Using Image Processing

Authors: Byounghyun Kim, Youngjoon Han, Hernsoo Hahn

Abstract:

The purpose of this study is to find a natural gait for a biped robot, similar to that of a human being, by analyzing the COG (Center of Gravity) trajectory of human gait. Human gait naturally maintains stability while using minimum energy. This paper seeks the natural gait pattern of a biped robot that uses minimum energy while maintaining stability, by analyzing the human gait pattern measured from gait images on the sagittal plane and the COG trajectory on the frontal plane. It is not possible to apply the joint torques of a human directly to a biped robot because they have different degrees of freedom; nonetheless, humans and 5-link biped robots are kinematically similar. Therefore, we generate the gait pattern of the 5-link biped robot using a genetic algorithm (GA) that adapts the gait pattern, utilizing the human ZMP (Zero Moment Point) and the joint torques measured from the human gait pattern. The proposed algorithm creates a fluent gait pattern for the biped robot, similar to a human's, and minimizes energy consumption, because the gait pattern of the 5-link biped robot model takes into account the torque of each human joint on the sagittal plane and the ZMP trajectory on the frontal plane. The paper demonstrates the superiority of the proposed algorithm by evaluating the 5-link biped robot with two kinds of gait patterns: one generated in the conventional way using inverse kinematics, and one generated in the proposed way considering visual naturalness and efficiency.

Keywords: 5-link biped robot, gait pattern, COG (Center of Gravity), ZMP (Zero Moment Point).

311 Dengue Disease Mapping with Standardized Morbidity Ratio and Poisson-gamma Model: An Analysis of Dengue Disease in Perak, Malaysia

Authors: N. A. Samat, S. H. Mohd Imam Ma’arof

Abstract:

Dengue is an infectious vector-borne viral disease commonly found in tropical and sub-tropical regions around the world, especially in urban and semi-urban areas, including Malaysia. There is currently no available vaccine or chemotherapy for the prevention or treatment of dengue; prevention and treatment therefore depend on vector surveillance and control measures. Disease risk mapping has been recognized as an important tool in prevention and control strategies for diseases. The choice of statistical model used for relative risk estimation is important, as a good model will subsequently produce a good disease risk map. The aim of this study is therefore to estimate the relative risk for dengue disease based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on one of the earliest applications of Bayesian methodology, the Poisson-gamma model. This paper begins by providing a review of the SMR method, which we then apply to dengue data from Perak, Malaysia. We then fit an extension of the SMR method, the Poisson-gamma model. Both results are displayed and compared using graphs, tables and maps. The results of the analysis show that the latter method gives better relative risk estimates than the SMR. The Poisson-gamma model has been demonstrated to overcome the problem of the SMR when there are no observed dengue cases in certain regions. However, covariate adjustment in this model is difficult and there is no possibility of allowing for spatial correlation between risks in adjacent areas. These drawbacks have motivated many researchers to propose other alternative methods for estimating the risk.
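The contrast between the SMR and the Poisson-gamma estimate can be sketched with toy counts. The SMR is simply observed over expected cases, SMR_i = O_i / E_i, while the conjugate Poisson-gamma model shrinks it toward the prior mean; the counts and prior below are illustrative, not the Perak data.

```python
import numpy as np

# Hypothetical observed (O) and expected (E) dengue counts for five districts
O = np.array([12, 0, 35, 7, 3], dtype=float)
E = np.array([10.2, 4.1, 21.5, 9.8, 2.4])

smr = O / E                      # SMR collapses to zero when O = 0, the weakness noted above
print("SMR           :", np.round(smr, 2))

# Poisson-gamma: O_i ~ Poisson(E_i * r_i), r_i ~ Gamma(alpha, beta).
# The posterior mean relative risk shrinks the SMR toward the prior mean alpha/beta.
alpha, beta = 2.0, 2.0           # illustrative prior; in practice estimated from the data
posterior_rr = (alpha + O) / (beta + E)
print("Poisson-gamma :", np.round(posterior_rr, 2))
```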

Keywords: Dengue disease, Disease mapping, Standardized Morbidity Ratio, Poisson-gamma model, Relative risk.

310 Semantic Enhanced Social Media Sentiments for Stock Market Prediction

Authors: K. Nirmala Devi, V. Murali Bhaskaran

Abstract:

Traditional document representation for classification follows the Bag of Words (BoW) approach to represent term weights. This conventional method uses the Vector Space Model (VSM) to exploit the statistical information of terms in the documents, and it fails to capture the semantic information as well as the order of the terms. The phrase-based approach follows the order of the terms present in the documents but not the semantics behind the words. Therefore, a semantic concept-based approach is used in this paper to enhance the semantics by incorporating ontology information. A novel method is proposed to forecast the intraday directional movement of stock market prices based on sentiments from Twitter and Moneycontrol news articles. Stock market forecasting is a very difficult and highly complicated task because it is affected by many factors such as economic conditions, political events and investor sentiment. Stock market series are generally dynamic, nonparametric, noisy and chaotic by nature. Sentiment analysis, along with the wisdom of crowds, can automatically compute the collective intelligence of future performance in many areas such as stock markets, box office sales and election outcomes. The proposed method utilizes collective sentiments to predict stock price directional movements. The collective sentiments in the above social media have strong predictive power for stock price directional movements (up/down), as established using the Granger causality test.
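The Granger causality test mentioned above asks whether lagged sentiment improves the prediction of price movements beyond the movements' own history. A hedged sketch with statsmodels and synthetic series (not the Twitter or news data) follows.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 200
sentiment = rng.standard_normal(n)
# Toy daily price movement that partly follows the previous day's sentiment
movement = 0.6 * np.roll(sentiment, 1) + rng.standard_normal(n) * 0.5
movement[0] = 0.0

# Column order matters: the test checks whether the 2nd column Granger-causes the 1st.
data = pd.DataFrame({"movement": movement, "sentiment": sentiment})
results = grangercausalitytests(data[["movement", "sentiment"]], maxlag=2)
# The printed F-test p-values indicate whether lagged sentiment helps predict movement.
```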

Keywords: Bag of Words, Collective Sentiments, Ontology, Semantic relations, Sentiments, Social media, Stock Prediction, Twitter, Vector Space Model and wisdom of crowds.

309 Statistical Analysis and Optimization of a Process for CO2 Capture

Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi

Abstract:

CO2 capture and storage technologies play a significant role in the control of climate change by reducing carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process in which carbon dioxide is passed into pH-adjusted high-salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. The process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and a higher pH level, without the use of ammonia. The process was tested in a semi-batch bubble column reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters using a Central Composite Design (CCD) with four levels and four variables. The operating parameters were gas flow rate (0.5-1.5 L/min), reactor temperature (10-50 °C), buffer concentration (0.2-2.6%) and water salinity (25-197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99% and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal using the modified Solvay method.
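The second-order (quadratic plus interaction) response surface fit can be sketched with statsmodels; the factor ranges follow the abstract, but the response values and coefficients below are purely illustrative, and only three of the four factors are shown for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 30
df = pd.DataFrame({
    "flow": rng.uniform(0.5, 1.5, n),      # gas flow rate, L/min
    "temp": rng.uniform(10, 50, n),        # reactor temperature, deg C
    "buff": rng.uniform(0.2, 2.6, n),      # buffer concentration, %
})
# Toy capture-efficiency response (illustrative coefficients only)
df["capture"] = (60 + 15 * df.buff - 3 * df.buff**2 - 0.2 * df.temp
                 - 8 * df.flow + rng.normal(0, 1.5, n))

# Second-order response surface fitted by ordinary least squares
model = smf.ols(
    "capture ~ flow + temp + buff + I(flow**2) + I(temp**2) + I(buff**2)"
    " + flow:temp + flow:buff + temp:buff",
    data=df,
).fit()
print(model.summary())          # significance of each term, ANOVA-style
```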

Keywords: Bubble column reactor, CO2 capture, Response Surface Methodology, water desalination.

308 Utilizing Analytic Hierarchy Process to Analyze Consumers' Purchase Evaluation Factors of Smartphones

Authors: Yi-Chung Hu, Yu-Lin Liao

Abstract:

Due to the rapid development of technology, competition among technological products is fierce; it is therefore important to understand market trends and consumers' demands and preferences. As smartphones are now prevalent, the main purpose of this paper is to utilize the Analytic Hierarchy Process (AHP) to analyze consumers' purchase evaluation factors for smartphones. Through the AHP expert questionnaire, the main functions of smartphones are classified into five aspects: "user interface", "mobile commerce functions", "hardware and software specifications", "entertainment functions" and "appearance and design", and the weights of these aspects are analyzed. Four evaluation criteria are then evaluated under each aspect to rank their weights. The analysis shows that the factors consumers consider when purchasing are, in order, "hardware and software specifications", "user interface", "appearance and design", "mobile commerce functions" and "entertainment functions". The "hardware and software specifications" aspect obtains a weight of 33.18% and is the most important factor that consumers take into account. In addition, the most important evaluation criteria are, in order, central processing unit, operating system, touch screen, and battery function. The results of the study can be adopted as reference data for mobile phone manufacturers when designing and marketing future products to satisfy the voice of the customer.
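AHP derives the aspect weights from a pairwise-comparison matrix via its principal eigenvector and checks judgment consistency. The sketch below assumes an illustrative comparison matrix (not the study's expert judgments) for the five aspects.

```python
import numpy as np

# Illustrative pairwise-comparison matrix for the five aspects
# (user interface, mobile commerce, hardware/software, entertainment, design)
A = np.array([
    [1,   3,   1/2, 4,   2],
    [1/3, 1,   1/4, 2,   1/2],
    [2,   4,   1,   5,   3],
    [1/4, 1/2, 1/5, 1,   1/3],
    [1/2, 2,   1/3, 3,   1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # priority weights of the five aspects

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
ri = 1.12                                # Saaty's random index for n = 5
print("weights:", np.round(weights, 3), " CR =", round(ci / ri, 3))
```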

Keywords: Analytic Hierarchy Process (AHP), evaluation criteria, purchase evaluation factors, smartphone.

307 Director Compensation, CEO Duality, State Ownership, and Firm Performance in China: Proof from Panel Data of Publicly Listed Enterprises from 1999 to 2020

Authors: Wanda Luen-Wun Siu, Xiaowen Zhang

Abstract:

This paper offers the first systematic evidence on how director remuneration relates to enterprise earnings in listed firms in China, given that most existing evidence focuses on cross-sectional data or data spanning a short period of time. Using full economic and business panel data on China's publicly listed enterprises from 1999 to 2020, over two decades, from the China Stock Market & Accounting Research database, we found statistically significant positive associations between director pay and firm performance in privately owned firms over this period, supporting agency theory. In contrast, among state-owned enterprises, there was an inverse relation between director compensation and firm financial performance, contributing to the existing literature. The results also revealed that state-owned enterprises performed financially as well as private enterprises. Such findings suggest that state ownership may align officials' career incentives with party priorities rather than pecuniary incentives. CEO duality also enhanced firm performance. As such, allegiance to the party and possible advancement to a higher political position would motivate company directors in state-owned enterprises, whereas directors in privately owned enterprises may be motivated by monetary incentives. In addition, a statistical regression model was proposed and tested to obtain the results on the performance of state-owned enterprises. Finally, suggestions are made on how to improve the institutional management of government-owned corporations in China.

Keywords: China’s listed Firm, director compensation, CEO duality, firm performance, panel analysis.

306 Mercerization Treatment Parameter Effect on Natural Fiber Reinforced Polymer Matrix Composite: A Brief Review

Authors: Mohd Yussni Hashim, Mohd Nazrul Roslan, Azriszul Mohd Amin, Ahmad Mujahid Ahmad Zaidi, Saparudin Ariffin

Abstract:

Environmental awareness and the depletion of petroleum resources are among the vital factors motivating a number of researchers to explore the potential of reusing natural fiber as an alternative composite material in industries such as packaging, automotive and building construction. Natural fibers are abundant, low cost and lightweight in polymer composites and, most importantly, biodegradable, for which reason they are often called "eco-friendly" materials. However, their applications are still limited by several factors, such as moisture absorption, poor wettability and large scatter in mechanical properties. Among the main challenges for natural fiber reinforced matrix composites is the inclination of the fibers to entangle and form agglomerates during processing due to fiber-fiber interaction. This tends to prevent good dispersion of the fibers in the matrix, resulting in poor interfacial adhesion between the hydrophobic matrix and the hydrophilic reinforcing natural fiber. To overcome this challenge, fiber treatment is one common alternative that can be used to modify the fiber surface topology chemically, physically or mechanically. This paper focuses on the effect of mercerization treatment on the enhancement of the mechanical properties of natural fiber reinforced composites, or so-called biocomposites. It specifically discusses mercerization parameters and the resulting improvement in the mechanical properties of natural fiber reinforced composites.

Keywords: Mercerization treatment, mechanical properties, natural fiber and bio composite

305 A BERT-Based Model for Financial Social Media Sentiment Analysis

Authors: Josiel Delgadillo, Johnson Kinyua, Charles Mutigwe

Abstract:

The purpose of sentiment analysis is to determine the sentiment strength (e.g., positive, negative, neutral) from a textual source for good decision-making. Natural Language Processing (NLP) in domains such as financial markets requires knowledge of domain ontology, and pre-trained language models, such as BERT, have made significant breakthroughs in various NLP tasks by training on large-scale un-labeled generic corpora such as Wikipedia. However, sentiment analysis is a strong domain-dependent task. The rapid growth of social media has given users a platform to share their experiences and views about products, services, and processes, including financial markets. StockTwits and Twitter are social networks that allow the public to express their sentiments in real time. Hence, leveraging the success of unsupervised pre-training and a large amount of financial text available on social media platforms could potentially benefit a wide range of financial applications. This work is focused on sentiment analysis using social media text on platforms such as StockTwits and Twitter. To meet this need, SkyBERT, a domain-specific language model pre-trained and fine-tuned on financial corpora, has been developed. The results show that SkyBERT outperforms current state-of-the-art models in financial sentiment analysis. Extensive experimental results demonstrate the effectiveness and robustness of SkyBERT.
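SkyBERT itself is not publicly released, so the inference step can only be illustrated with a hedged stand-in: the sketch below uses the Hugging Face transformers pipeline with a publicly available financial-sentiment checkpoint (ProsusAI/finbert) purely as a placeholder for a fine-tuned model.

```python
from transformers import pipeline

# ProsusAI/finbert is used here only as a stand-in financial-sentiment checkpoint;
# it is not the SkyBERT model described in the paper.
classifier = pipeline("text-classification", model="ProsusAI/finbert")

posts = [
    "$AAPL crushing earnings again, loading up more shares",
    "This guidance cut is brutal, getting out before it drops further",
]
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8s} ({result['score']:.2f})  {post}")
```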

Keywords: BERT, financial markets, Twitter, sentiment analysis.

304 Cooperative Learning: A Case Study on Teamwork through Community Service Project

Authors: Priyadharshini Ahrumugam

Abstract:

Much research has recognized that cooperative groups achieve remarkable results compared with solitary or individualistic efforts. Based on Johnson and Johnson's model of cooperative learning, the five key components of cooperation are positive interdependence, face-to-face promotive interaction, individual accountability, social skills, and group processing. In 2011, the Malaysian Ministry of Higher Education (MOHE) introduced the Holistic Student Development policy with the aim of developing morally sound individuals equipped with lifelong learning skills, and the Community Service project was included in this improvement initiative. The purpose of this study is to assess how team-based learning facilitates, in particular, students' positive interdependence and face-to-face promotive interaction. The research methods involve in-depth interviews with team leaders and selected team members, and a content analysis of the undergraduate students' reflective journals. A significant positive relationship was found between students' progressive outlook towards teamwork and the two highlighted components. The key findings show that students have gained in their individual learning and work results through teamwork and interaction with other students. The inclusion of Community Service as a MOHE subject resonates with cooperative learning methods that enhance supportive relationships and develop students' social skills together with their professional skills.

Keywords: Community service, cooperative learning, positive interdependence, teamwork.

303 Altered Network Organization in Mild Alzheimer's Disease Compared to Mild Cognitive Impairment Using Resting-State EEG

Authors: Chia-Feng Lu, Yuh-Jen Wang, Shin Teng, Yu-Te Wu, Sui-Hing Yan

Abstract:

Brain functional networks based on resting-state EEG data were compared between patients with mild Alzheimer's disease (mAD) and matched patients with the amnestic subtype of mild cognitive impairment (aMCI). We combined the time-frequency cross mutual information (TFCMI) method, used to estimate EEG functional connectivity between cortical regions, with a graph-theoretic network analysis to investigate the alterations of functional networks in mAD compared with the aMCI group. We investigated the changes in network integrity, local clustering, information processing efficiency, and fault tolerance in mAD brain networks for different frequency bands based on several topological properties, including degree, strength, clustering coefficient, shortest path length, and efficiency. Results showed disruptions of network integrity and reductions of network efficiency in mAD, characterized by lower degree, decreased clustering coefficient, longer shortest path length, and reduced global and local efficiency in the delta, theta, beta2, and gamma bands. These significant changes in network organization can assist in discriminating mAD from aMCI in clinical practice.
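The topological properties named above can be computed with NetworkX on a weighted connectivity matrix. The sketch below uses a random, thresholded toy matrix in place of the TFCMI connectivity estimated in the study; electrode count and threshold are assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
n = 19                                   # e.g., 19 EEG electrodes (illustrative)
W = rng.uniform(0, 1, (n, n))
W = np.triu(W, 1) + np.triu(W, 1).T      # symmetric toy connectivity matrix
W[W < 0.6] = 0                           # threshold weak connections

G = nx.from_numpy_array(W)
degree = dict(G.degree())                          # node degree
strength = dict(G.degree(weight="weight"))         # node strength (weighted degree)
clustering = nx.average_clustering(G, weight="weight")
global_eff = nx.global_efficiency(G)
local_eff = nx.local_efficiency(G)
print(f"clustering={clustering:.3f} global_eff={global_eff:.3f} local_eff={local_eff:.3f}")
if nx.is_connected(G):
    print("characteristic path length:", nx.average_shortest_path_length(G))
```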

Keywords: EEG, functional connectivity, graph theory, TFCMI.

302 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple, yet accurate order statistical ordinal regression function that predicts relay race places with changeover-times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg-times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover-times alongside an estimator for the total number of teams as in the notorious German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover-time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square-errors than linear regression, mord regression and Gaussian process regression.
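The Fenton-Wilkinson step above approximates a sum of independent log-normal leg-times by a single log-normal that matches the mean and variance of the sum. A minimal sketch follows; the log-scale leg parameters are illustrative, not fitted to the Jukola data.

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent LogNormal(mu_i, sigma_i^2) variables
    by a single log-normal, matching the mean and variance of the sum."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    means = np.exp(mus + sigmas**2 / 2)              # E[X_i]
    variances = (np.exp(sigmas**2) - 1) * means**2   # Var[X_i]
    m, v = means.sum(), variances.sum()
    sigma2 = np.log(1 + v / m**2)
    mu = np.log(m) - sigma2 / 2
    return mu, np.sqrt(sigma2)

# Illustrative log-scale parameters for three relay legs (times in minutes)
mu_sum, sigma_sum = fenton_wilkinson([3.9, 4.1, 4.0], [0.15, 0.20, 0.18])
print(f"changeover-time ~ LogNormal(mu={mu_sum:.3f}, sigma={sigma_sum:.3f})")
```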

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling.

301 An Efficient Approach for Shear Behavior Definition of Plant Stalk

Authors: M. R. Kamandar, J. Massah

Abstract:

Information on the impact cutting behavior of plant stalks plays an important role in the design and fabrication of plant cutting equipment. It is difficult to establish a theoretical method for defining the cutting properties of plant stalks because the cutting process is complex; it is therefore necessary to set up an experimental approach to determine the cutting parameters of a single stalk. To measure the shear force, shear energy and shear strength of plant stalks, a special impact cutting tester was fabricated. It was similar to an Izod impact tester for metals, but a cutting blade and a data acquisition system were attached to the end of the pendulum's arm. The apparatus included four strain gauges and a digital indicator to show the real-time cutting force on the plant stalk. To measure the shear force and to test the apparatus, the stalks of two plants, buxus and privet, were selected. The samples were cut under impact at four loading rates (1, 2, 3 and 4 m/s) and at three internodes (fifth, tenth and fifteenth). For buxus, the minimum cutting energy was obtained at the fifth internode and a loading rate of 4 m/s, and the maximum shear energy at the fifteenth internode and a loading rate of 1 m/s. For privet, the minimum shear energy was likewise obtained at the fifth internode and a loading rate of 4 m/s, and the maximum at the fifteenth internode and a loading rate of 1 m/s. For both plants, the statistical analysis showed that increasing the impact cutting speed decreases the shear energy and shear strength; in both cases, the shear force also decreased as the cutting speed increased.

Keywords: Buxus, privet, impact cutting, shear energy.

300 Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea

Authors: I. Asanuma, T. Yamaguchi, J. Park, K. J. Mackin

Abstract:

Fishery lights on the sea surface can be detected by the Day and Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite. The DNB covers the spectral range of 500 to 900 nm and achieves high sensitivity. The DNB has difficulty distinguishing fishing lights from lunar light reflected by clouds, which affects observations for half of the month. Fishery lights and other surface lights are separated from cloud-reflected lunar light by a method using the DNB and the infrared band, in which the detection limits are defined as a function of the brightness temperature, using the difference from the maximum temperature for each level of DNB radiance and the contrast of DNB radiance against the background radiance. Fishery boats and structures on islands can be detected by Synthetic Aperture Radar (SAR) on polar-orbiting satellites using the microwaves reflected by surface targets. SAR faces a tradeoff between spatial resolution and coverage when detecting small targets such as fishery boats. The distribution of fishery boats and island activities was detected using the ScanSAR narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. The fishing boats were detected as single pixels of strongly scattering targets with the ScanSAR narrow mode, whose spatial resolution is 30 m. As the look-angle-dependent scattering signals exhibit significant differences, the standard deviations of the scattered signals for each look angle were used as a threshold to separate the signals of fishing boats and island structures from background noise. It was difficult to validate the targets detected by the DNB against the SAR data because of the six-hour time lag between DNB observations at midnight and SAR observations in the morning or evening. Temporal changes in island activities were detected as changes in the mean DNB intensity over a circular area corresponding to the scale of the activities. An increase in mean DNB intensity corresponded to the beginning of dredging, and subsequent changes in intensity indicated the end of reclamation and the following construction of facilities.

Keywords: Day night band, fishery, SAR, South China Sea.

299 Evaluating the Validity of Computational Fluid Dynamics Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements

Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck

Abstract:

This research presents the validation of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from the previous measurement campaign at the North Campus, using the standard deviation method (SDM). The numerical model showed maximum average percent errors of approximately 53% and 37% for winds from the North and Northwest, respectively; good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.
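The three dispersion-validation measures named above (FB, MG, NMSE) have standard definitions in the model-evaluation literature and are straightforward to compute from paired observed and predicted concentrations; the values below are illustrative, not the study's data.

```python
import numpy as np

def validation_metrics(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = 2 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())   # fractional bias
    mg = np.exp(np.log(obs).mean() - np.log(pred).mean())              # geometric mean bias
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())     # normalized mean square error
    return fb, mg, nmse

# Illustrative paired tracer concentrations (observed vs. CFD-predicted)
obs = [1.8, 2.4, 0.9, 3.1, 1.2, 2.0]
pred = [1.6, 2.7, 1.1, 2.8, 1.0, 2.3]
fb, mg, nmse = validation_metrics(obs, pred)
print(f"FB = {fb:.3f}, MG = {mg:.3f}, NMSE = {nmse:.3f}")
```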

Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow.

298 Characteristics of Wall Thickness Increase in Pipe Reduction Process using Planetary Rolls

Authors: Yuji Kotani, Shunsuke Kanai, Hisaki Watari

Abstract:

In recent years, global warming has become a worldwide problem, and the reduction of carbon dioxide emissions is a top priority for many companies in the manufacturing industry. In the automobile industry as well, the reduction of carbon dioxide emissions is one of the most important issues. Technology that reduces the weight of automotive parts improves the fuel economy of automobiles and is therefore important for reducing carbon dioxide. Even when this weight reduction technology is applied to electric rather than gasoline automobiles, reducing energy consumption remains an important issue. The plastic processing of hollow pipes is one important technology for realizing the weight reduction of automotive parts. Ohashi et al. [1], [2] present an example of research on pipe forming in which a pipe diameter was enlarged using a lost core, achieving the suppression of wall thickness reduction and greater pipe expansion than hydroforming. In this study, we investigated a method to increase the wall thickness of a pipe through pipe compression using planetary rolls. Establishing a technology whereby the wall thickness of a pipe can be controlled without buckling the pipe is important for the weight reduction of products. Using the finite element method, we predicted that it would be possible to compress an aluminum pipe with a 3 mm wall thickness by approximately 20% and to increase its wall thickness by approximately 20% by pressing the hollow pipe with planetary rolls.

Keywords: Pipe-Forming, Wall Thickness, Finite-element-method

297 Complex Wavelet Transform Based Image Denoising and Zooming Under the LMMSE Framework

Authors: T. P. Athira, Gibin Chacko George

Abstract:

This paper proposes a dual-tree complex wavelet transform (DT-CWT) based directional interpolation scheme for noisy images. The problems of denoising and interpolation are modelled as estimating the noiseless and missing samples under the same framework of optimal estimation. Initially, the DT-CWT is used to decompose an input low-resolution noisy image into low- and high-frequency subbands. The high-frequency subband images are interpolated by linear minimum mean square error (LMMSE) based interpolation, which preserves the edges of the interpolated images. For each noisy LR image sample, we compute multiple estimates along different directions and then fuse those directional estimates into a more accurate denoised LR image. The estimation parameters calculated during denoising can be readily reused to interpolate the missing samples. The inverse DT-CWT is applied to the denoised input and the interpolated high-frequency subband images to obtain the high-resolution image. Compared with conventional schemes that perform denoising and interpolation in tandem, the proposed DT-CWT based noisy image interpolation method reduces many noise-caused interpolation artifacts and preserves the image edge structures well. The visual and quantitative results show that the proposed technique outperforms many existing denoising and interpolation methods.
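At the core of the denoising step is the LMMSE estimate of a noiseless coefficient x from a noisy observation y = x + n, namely x_hat = mean + (sigma_x^2 / (sigma_x^2 + sigma_n^2)) * (y - mean). A minimal sketch follows, assuming a known noise variance and toy subband coefficients rather than an actual DT-CWT decomposition.

```python
import numpy as np

def lmmse_denoise(y, noise_var):
    """LMMSE estimate of noiseless coefficients x from y = x + n:
    x_hat = mean + (sig_x^2 / (sig_x^2 + sig_n^2)) * (y - mean)."""
    y = np.asarray(y, float)
    mean = y.mean()
    sig_x2 = max(y.var() - noise_var, 0.0)   # signal variance estimated from the observation
    gain = sig_x2 / (sig_x2 + noise_var)
    return mean + gain * (y - mean)

# Toy high-frequency subband coefficients corrupted by Gaussian noise
rng = np.random.default_rng(6)
x = np.array([0.0, 4.0, -3.5, 0.2, 5.1, -0.1])
y = x + rng.normal(0, 1.0, x.size)
print(np.round(lmmse_denoise(y, noise_var=1.0), 2))
```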

Keywords: Dual-tree complex wavelet transform (DT-CWT), denoising, interpolation, optimal estimation, super resolution.

296 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation serves as a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [z_P, z_N]. The z_P component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The z_N component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since z_P represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject identity, from other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.

Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.

295 Effects of Fermentation Techniques on the Quality of Cocoa Beans

Authors: Monday O. Ale, Adebukola A. Akintade, Olasunbo O. Orungbemi

Abstract:

Fermentation, an important operation in the processing of cocoa beans, is now affected by recent climate change across the globe. The major requirement for effective fermentation is the ability of the material used to retain sufficient heat for the required microbial activities. Apart from the effects of climate on the rate of heat retention, the materials used for fermentation play an important role, yet most farmers still restrict fermentation to traditional methods. Improving cocoa fermentation in this era of climate change makes it necessary to examine other materials that may be suitable for the process. Therefore, the objective of this study was to determine the effects of fermentation techniques on the quality of cocoa beans. The materials used in this fermentation research were heap-leaves (traditional), stainless steel, plastic tin, plastic basket and a wooden box. The fermentation period varied from 0 to 10 days. Physical and chemical tests were carried out on the samples to determine the quality variables. The weight per bean varied from 1.0-1.2 g after drying across the samples, and the predominant color of the dry beans was brown, except for the samples fermented in stainless steel. The moisture content varied from 5.5-7%. The mineral content and the heavy metals decreased with increasing fermentation period. A wooden box can conclusively be used as an alternative to heap-leaves, as there was no significant difference in the physical features of the samples fermented with the two methods. The use of a wooden box as an alternative for cocoa fermentation is therefore recommended for cocoa farmers.

Keywords: Effects, fermentation, fermentation materials, period, quality.

294 Clinical Parameters Response to Low-Level Laser versus Monochromatic Near-Infrared Photo Energy in Diabetic Patients with Peripheral Neuropathy

Authors: Abeer A. Abdelhamed

Abstract:

Background: Diabetic sensorimotor polyneuropathy (DSP) is one of the most common microvascular complications of type 2 diabetes. Loss of sensation is thought to contribute to a lack of static and dynamic stability and increased risk of falling. Purpose: The purpose of this study was to compare the effects of low-level laser (LLL) and monochromatic near-infrared photo energy (MIRE) on pain, cutaneous sensation, static stability, and index of lower limb blood flow in diabetic patients with peripheral neuropathy. Methods: Forty diabetic patients with peripheral neuropathy were recruited for participation in this study. They were divided into two groups: The MIRE group, which contained 20 patients, and the LLL group, which contained 20 patients. All patients who participated in the study had been subjected to various physical assessment procedures, including pain, cutaneous sensation, Doppler flow meter, and static stability assessments. The baseline measurements were followed by treatment sessions that were conducted twice a week for six successive weeks. Results: The statistical analysis of the data revealed significant improvement of pain in both groups, with significant improvement in cutaneous sensation and static balance in the MIRE group compared to the LLL group; on the other hand, the results showed no significant differences in lower limb blood flow between the groups. Conclusion: LLL and MIRE can improve painful symptoms in patients with diabetic neuropathy. On the other hand, MIRE is also useful in improving cutaneous sensation and static stability in patients with diabetic neuropathy.

Keywords: Diabetic neuropathy, Doppler flow meter, low-level laser, monochromatic near-infrared photo energy.

293 Digital Content Strategy: Detailed Review of the Key Content Components

Authors: Oksana Razina, Shakeel Ahmad, Jessie Qun Ren, Olufemi Isiaq

Abstract:

The modern life of businesses is categorically reliant on their established position online, where digital (and particularly website) content plays a significant role as the first point of information. Digital content, therefore, becomes essential – from making the first impression through to the building and development of client relationships. Despite a number of valuable papers suggesting a strategic approach when dealing with digital data, other sources often do not view or accept the approach to digital content as a holistic or continuous process; associations are frequently made with merely a one-off marketing campaign or similar. The challenge lies in establishing an agreed definition for the notion of Digital Content Strategy (DCS), which currently does not exist, as it is viewed from an excessive number of angles. A strategic approach to content, nonetheless, is required, both practically and contextually. We therefore aimed to identify the key content components comprising a DCS, to ensure all aspects are covered and strategically applied – from the company's understanding of the content value to the ability to accommodate flexibility of content and advances in technology. This conceptual project evaluated existing literature on the topic of DCS and related aspects, using the PRISMA Systematic Review Method, Document Analysis, Inclusion and Exclusion Criteria, Scoping Review, the Snowballing Technique and Thematic Analysis. The data were collected from academic and statistical sources, and from government and relevant trade publications. Based on the suggestions from academic and trade sources related to the issues discussed, we revealed the key actions for content creation and attempted to define the notion of DCS. The major finding of the study is a set of Key Content Components of DCS that can be considered for implementation in a business retail setting.

Keywords: Digital content strategy, digital marketing strategy, key content components, websites.
