Search results for: network model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19929


17559 An Intelligent WSN-Based Parking Guidance System

Authors: Sheng-Shih Wang, Wei-Ting Wang

Abstract:

This paper presents the design of an intelligent guidance system, based on wireless sensor networks, for efficient parking in parking lots. The proposed system consists of a parking space allocation subsystem, a parking space monitoring subsystem, a driving guidance subsystem, and a vehicle detection subsystem. In the system, we propose a novel and effective virtual coordinate system for the sensing and display devices to determine a suitable vacant parking space and provide precise guidance to the driver. This study constructs a ZigBee-based wireless sensor network on the Arduino platform and implements a prototype of the proposed system using Arduino-based components. Experimental results confirm that the prototype not only works well but also provides drivers with correct parking information.

Keywords: Arduino, parking guidance, wireless sensor network, ZigBee

Procedia PDF Downloads 574
17558 Prioritization of Mutation Test Generation with Centrality Measure

Authors: Supachai Supmak, Yachai Limpiyakorn

Abstract:

Mutation testing can be applied to assess the quality of test cases. Prioritization of mutation test generation has been a critical element of industry practice that contributes to the evaluation of test cases. Industry generally delivers products under time-to-market pressure and thus inevitably sacrifices software testing tasks, even though many test cases are required for software verification. This paper presents an approach that applies a social network centrality measure, PageRank, to prioritize mutation test generation. Source code modules with the highest PageRank values are addressed first when developing test cases, as these modules are vulnerable to defects or anomalies that may propagate consequent defects to many other associated modules. Moreover, the approach helps identify reducible test cases in the test suite while still maintaining the same criteria as the original set of test cases.
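
As a rough illustration of the prioritization idea (not the authors' implementation), the following Python sketch ranks the source modules of a hypothetical dependency graph by PageRank and orders mutation test generation accordingly; the module names and graph edges are invented for the example.

```python
import networkx as nx

# Hypothetical dependency graph: an edge A -> B means module A calls module B.
calls = [("api", "billing"), ("api", "auth"), ("billing", "db"),
         ("auth", "db"), ("reports", "db"), ("reports", "billing")]
G = nx.DiGraph(calls)

# Centrality of each module; highly ranked modules get mutation tests first.
scores = nx.pagerank(G, alpha=0.85)
priority = sorted(scores, key=scores.get, reverse=True)
print("Generate mutation tests in this order:", priority)
```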

Keywords: software testing, mutation test, network centrality measure, test case prioritization

Procedia PDF Downloads 111
17557 Terrain Classification for Ground Robots Based on Acoustic Features

Authors: Bernd Kiefer, Abraham Gebru Tesfay, Dietrich Klakow

Abstract:

The motivation of our work is to detect different terrain types traversed by a robot based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficients and Gammatone frequency cepstral coefficients for feature extraction, and a Gaussian mixture model and a feed-forward neural network for classification. We analyze the system’s performance by comparing our proposed techniques with other features surveyed from related work. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
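
A minimal sketch of one feature/classifier combination named above (MFCC features with one Gaussian mixture model per terrain class); the file names, sample rate and model sizes are assumptions, not the paper's exact setup.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    # Frame-level MFCC features from the robot-terrain interaction audio.
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)

# Hypothetical training recordings per terrain type.
train = {"gravel": ["gravel_01.wav"], "grass": ["grass_01.wav"], "asphalt": ["asphalt_01.wav"]}
models = {}
for terrain, files in train.items():
    X = np.vstack([mfcc_frames(f) for f in files])
    models[terrain] = GaussianMixture(n_components=8, covariance_type="diag").fit(X)

def classify(chunk_path):
    # Pick the terrain whose GMM gives the highest mean log-likelihood for the chunk.
    X = mfcc_frames(chunk_path)
    return max(models, key=lambda t: models[t].score(X))

print(classify("unknown_chunk.wav"))
```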

Keywords: acoustic features, autonomous robots, feature extraction, terrain classification

Procedia PDF Downloads 366
17556 Modeling of a Pendulum Test Including Skin and Muscles under Compression

Authors: M. J. Kang, Y. N. Jo, H. H. Yoo

Abstract:

Pendulum tests are used to identify the stretch reflex and diagnose spasticity, and several studies have attempted to build mathematical models to simulate the resulting motion. Thighs are subject to compressive forces due to gravity during a pendulum test, which affects the knee trajectories; however, most studies of pendulum tests have not considered this condition. We used a Kelvin-Voigt model as the compression model of skin and muscles. In this study, we investigated the viscoelastic behavior of skin and muscles using gelatin blocks in experiments on the vibration of a compliantly supported beam. We then calculated dynamic stiffness and loss factors from the experiment and estimated a damping coefficient for the model. We also performed pendulum tests on human lower limbs to validate the stiffness and damping coefficient of the skin model. To simulate the pendulum motion, we derive the equations of motion and use a stretch reflex activation model to estimate the muscle forces induced by the stretch reflex. To validate the results, we compared the activation with electromyography signals recorded during the experiments. The compression behavior of skin and muscles examined in this study can be applied to analyzing sitting posture as well as to developing surgical techniques.
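
A simplified numerical sketch of the modeling idea (all parameter values are assumed, not those identified in the paper): a lower-leg pendulum with a Kelvin-Voigt (parallel spring-damper) element representing the compressed skin and muscle at the knee.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, L, g = 4.0, 0.4, 9.81      # shank mass [kg], distance to center of mass [m], gravity
I = m * L**2                  # point-mass approximation of the moment of inertia
k, c = 3.0, 0.6               # Kelvin-Voigt stiffness [N*m/rad] and damping [N*m*s/rad]

def rhs(t, y):
    theta, omega = y
    torque_kv = -k * theta - c * omega            # spring and damper act in parallel
    domega = (-m * g * L * np.sin(theta) + torque_kv) / I
    return [omega, domega]

# Release the shank from 60 degrees, as in a pendulum test, and integrate.
sol = solve_ivp(rhs, (0.0, 5.0), [np.deg2rad(60.0), 0.0], max_step=0.01)
print("Final knee angle [deg]:", np.rad2deg(sol.y[0, -1]))
```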

Keywords: Kelvin-Voigt model, pendulum test, skin and muscles under compression, stretch reflex

Procedia PDF Downloads 444
17555 Census and Mapping of Oil Palms Over Satellite Dataset Using Deep Learning Model

Authors: Gholba Niranjan Dilip, Anil Kumar

Abstract:

Accurate and reliable mapping of oil palm plantations and a census of individual palm trees is a huge challenge. This study addresses this challenge by developing an optimized solution that implements deep learning techniques on remote sensing data. The oil palm is a very important tropical crop, and to improve its productivity and land management, it is imperative to have an accurate census over large areas. Since a manual census is costly and prone to approximations, a methodology for automated census using panchromatic images from the Cartosat-2, SkySat and WorldView-3 satellites is demonstrated. Two different study sites in Indonesia were selected. A customized set of training data and ground-truth data was created for this study from Cartosat-2 images. The pre-trained Single Shot MultiBox Detector (SSD) Lite MobileNet V2 Convolutional Neural Network (CNN) from the TensorFlow Object Detection API was subjected to transfer learning on this customized dataset. The SSD model is able to generate bounding boxes for each oil palm and count the palms with good accuracy on the panchromatic images; the detection yielded an F-score of 83.16% on seven different images. The detections are buffered and dissolved to generate polygons demarcating the boundaries of the oil palm plantations. This provided the area under the plantations as well as maps of their location, thereby completing the automated census with fairly high accuracy (≈100%). The trained CNN was found competent enough to detect oil palm crowns in images obtained from multiple satellite sensors and of varying temporal vintage, and it helped estimate the increase in oil palm plantations from 2014 to 2021 in the study area. The study proved that high-resolution panchromatic satellite imagery can successfully be used to undertake a census of oil palm plantations using CNNs.
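
A small sketch of the post-processing step described above, buffering detected crowns and dissolving them into plantation polygons; the library choice (Shapely), buffer radius and coordinates are assumptions for illustration only.

```python
from shapely.geometry import Point
from shapely.ops import unary_union

# Hypothetical palm-crown centroids in projected map coordinates (metres).
centroids = [(100.0, 200.0), (108.5, 203.0), (450.0, 460.0), (458.0, 455.0)]
crown_buffer_m = 6.0                               # assumed buffer radius per palm

buffers = [Point(x, y).buffer(crown_buffer_m) for x, y in centroids]
plantations = unary_union(buffers)                 # dissolve overlapping buffers

print("Number of palms detected:", len(centroids))
print("Plantation area (m^2):", round(plantations.area, 1))
```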

Keywords: object detection, oil palm tree census, panchromatic images, single shot multibox detector

Procedia PDF Downloads 159
17554 Application of Fractional Model Predictive Control to Thermal System

Authors: Aymen Rhouma, Khaled Hcheichi, Sami Hafsi

Abstract:

The article presents an application of Fractional Model Predictive Control (FMPC) to a fractional-order thermal system using a Controlled Auto-Regressive Integrated Moving Average (CARIMA) model obtained by discretization of a continuous fractional differential equation. Moreover, the output deviation approach is exploited to design the K-step-ahead output predictor, and the corresponding control law is obtained by solving a quadratic cost function. Experimental results on a thermal system are presented to emphasize the performance and effectiveness of the proposed predictive controller.
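
For readers unfamiliar with the predictive control law referenced above, the following generic sketch (not the fractional CARIMA derivation itself; all numbers are assumed) shows how an unconstrained quadratic cost over K-step-ahead predictions yields a closed-form control move.

```python
import numpy as np

K, lam = 5, 0.1                                  # horizon and control weighting
g = np.array([0.2, 0.5, 0.8, 0.95, 1.0])         # assumed step-response coefficients
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(K)] for i in range(K)])
w = np.ones(K)                                   # setpoint over the horizon
f = np.full(K, 0.3)                              # free response from past data (assumed)

# Minimize J = ||w - (G u + f)||^2 + lam ||u||^2  =>  u = (G'G + lam I)^-1 G'(w - f)
u = np.linalg.solve(G.T @ G + lam * np.eye(K), G.T @ (w - f))
print("First control move applied (receding horizon):", u[0])
```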

Keywords: fractional model predictive control, fractional order systems, thermal system, predictive control

Procedia PDF Downloads 409
17553 Optimization of Bifurcation Performance on Pneumatic Branched Networks in Next Generation Soft Robots

Authors: Van-Thanh Ho, Hyoungsoon Lee, Jaiyoung Ryu

Abstract:

Efficient pressure distribution within soft robotic systems, specifically to the pneumatic artificial muscle (PAM) regions, is essential to minimize energy consumption. This optimization involves adjusting reservoir pressure, pipe diameter, and the branching network layout to reduce flow speed and pressure drop while enhancing flow efficiency. The outcome of this optimization is a lightweight power source and reduced mechanical impedance, enabling extended wear and movement. To achieve this, a branching network system was created by combining pipe components and intricate cross-sectional area variations, employing the principle of minimal work based on a complete virtual human exosuit. The results indicate that gradually decreasing the cross-sectional area of the branching network reduces velocity and enhances momentum compensation, preventing flow disturbances at separation regions. These optimized designs achieve uniform velocity distribution (uniformity index > 94%) prior to entering the connection pipe, with a pressure drop of less than 5%. The design must also consider the length-to-diameter ratio for fluid dynamic performance and production cost. This approach can be utilized to create a comprehensive PAM system, integrating well-designed tube networks and complex pneumatic models.
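
As a brief illustration of the minimal-work branching principle invoked above (Murray's law), the sketch below computes daughter pipe diameters for a symmetric split; the parent diameter and split counts are assumed values, not the paper's design.

```python
def murray_daughter_diameter(d_parent_mm, n_daughters):
    """Equal daughter diameters satisfying d_parent^3 = n * d_daughter^3."""
    return d_parent_mm * (1.0 / n_daughters) ** (1.0 / 3.0)

d0 = 8.0                                   # assumed supply-line diameter [mm]
for n in (2, 3, 4):
    d = murray_daughter_diameter(d0, n)
    print(f"{n}-way split: daughter diameter = {d:.2f} mm")
```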

Keywords: pneumatic artificial muscles, pipe networks, pressure drop, compressible turbulent flow, uniformity flow, Murray's law

Procedia PDF Downloads 82
17552 Modelling Sudden Deaths from Myocardial Infarction and Stroke

Authors: Y. S. Yusoff, G. Streftaris, H. R. Waters

Abstract:

Death within 30 days is an important factor to examine, as there is a significant risk of death immediately following, or soon after, a myocardial infarction (MI) or stroke. In this paper, we model deaths within 30 days following an MI or stroke in the UK and examine how the probabilities of sudden death from MI or stroke changed over the period 1981-2000. We model the sudden deaths using a Generalized Linear Model (GLM), fitted using the R statistical package, under a binomial distribution for the number of sudden deaths. We parameterize our model using the extensive and detailed data from the Framingham Heart Study, adjusted to match UK rates. The results show a reduction over time in sudden deaths following an MI, but no significant improvement for sudden deaths following a stroke.
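
A hedged sketch of a binomial GLM of the kind described, using synthetic data rather than the Framingham/UK figures; the variable names and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "year": rng.integers(1981, 2001, n),
    "age": rng.integers(40, 90, n),
    "event": rng.choice(["MI", "stroke"], n),
})
# Synthetic 30-day mortality indicator with a mild downward trend over calendar time.
p = 1 / (1 + np.exp(-(-3 + 0.04 * (df["age"] - 60) - 0.02 * (df["year"] - 1981))))
df["death30"] = rng.binomial(1, p)

model = smf.glm("death30 ~ year + age + event", data=df,
                family=sm.families.Binomial()).fit()
print(model.summary())
```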

Keywords: sudden deaths, myocardial infarction, stroke, ischemic heart disease

Procedia PDF Downloads 284
17551 3D Modelling and Numerical Analysis of Human Inner Ear by Means of Finite Elements Method

Authors: C. Castro-Egler, A. Durán-Escalante, A. García-González

Abstract:

This paper presents a method for generating a finite element model of the human auditory inner ear system. The geometric model was built using 2D images from a virtual model of temporal bones; a point cloud was extracted manually from those images to construct a complete mesh of hexahedral elements. The main difference from previous models is the spiral shape of the cochlea with its three scalae completely defined: the scala tympani, scala media and scala vestibuli, which are separated by the basilar membrane and Reissner's membrane. To validate this model, numerical simulations were carried out with two models: an isolated inner ear and a whole model of the human auditory system. Ideal displacement conditions are applied over the oval window in the isolated inner-ear model. The whole model comprises the outer auditory canal, the tympanic membrane, the ossicular chain, and the inner ear, and its boundary condition is 1 Pa at the auditory canal entrance. The numerical FEM simulations use a harmonic analysis over a frequency range of 100-10,000 Hz with an interval of 100 Hz. The following results were obtained: the basilar membrane displacement; the scala media pressure along the cochlear length; and the transfer function of the middle ear normalized by the pressure at the tympanic membrane. The basilar membrane displacements and the pressure in the scala media make it possible to validate the frequency response of the basilar membrane.

Keywords: finite elements method, human auditory system model, numerical analysis, 3D modelling cochlea

Procedia PDF Downloads 362
17550 Documents Emotions Classification Model Based on TF-IDF Weighting Measure

Authors: Amr Mansour Mohsen, Hesham Ahmed Hassan, Amira M. Idrees

Abstract:

Emotion classification of text documents is applied to reveal whether a document expresses a particular emotion of its writer. While various supervised methods have previously been used for emotion classification of documents, in this research we present a novel model that supports the classification algorithms and yields more accurate results with the aid of the TF-IDF measure. Different experiments were carried out to demonstrate the applicability of the proposed model. The model succeeds in raising the accuracy percentage according to the determined metrics (precision, recall, and F-measure) by refining the lexicon, integrating lexicons from different perspectives, and applying the TF-IDF weighting measure over the classifying features. The proposed model has also been compared with other research to prove its competence in raising the accuracy of the results.
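
A minimal sketch of the TF-IDF weighting idea on toy data; the paper's lexicon refinement and integration steps are not reproduced, and the classifier choice here is an assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["I am thrilled with the results", "This delay makes me furious",
        "Such a sad and lonely evening", "What a joyful surprise today"]
labels = ["joy", "anger", "sadness", "joy"]

# TF-IDF features over unigrams and bigrams feeding a standard classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(docs, labels)
print(clf.predict(["I feel really angry about this"]))
```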

Keywords: emotion detection, TF-IDF, WEKA tool, classification algorithms

Procedia PDF Downloads 482
17549 An Automatic Speech Recognition Tool for the Filipino Language Using the HTK System

Authors: John Lorenzo Bautista, Yoon-Joong Kim

Abstract:

This paper presents the development of a Filipino speech recognition tool using the HTK System. The system was trained on a subset of the Filipino Speech Corpus developed by the DSP Laboratory of the University of the Philippines-Diliman. The speech corpus was used both for training and for testing the system by estimating the parameters of phonetic HMM-based (Hidden Markov Model) acoustic models. Experiments on different mixture weights were incorporated in the study. The phoneme-level, word-based recognition of a 5-state HMM resulted in an average accuracy rate of 80.13% for a single-Gaussian mixture model, 81.13% after implementing phoneme alignment, and 87.19% for the model with an increased number of Gaussian mixtures. The highest accuracy rate of 88.70% was obtained from a 5-state model with 6 Gaussian mixtures.

Keywords: Filipino language, Hidden Markov Model, HTK system, speech recognition

Procedia PDF Downloads 478
17548 Prediction of the Torsional Vibration Characteristics of a Rotor-Shaft System Using Its Scale Model and Scaling Laws

Authors: Jia-Jang Wu

Abstract:

This paper presents scaling laws that provide the criteria of geometric and dynamic similitude between a full-size rotor-shaft system and its scale model, and that can be used to predict the torsional vibration characteristics of the full-size rotor-shaft system from the corresponding data of its scale model. The scaling factors, which play fundamental roles in predicting the geometric and dynamic relationships between the full-size rotor-shaft system and its scale model, are first obtained from the equation of motion for torsional free vibration. Then, the scaling factor for the external force (i.e., torque) required for torsional forced vibration problems is determined based on Newton’s second law. Numerical results show that the torsional free and forced vibration characteristics of a full-size rotor-shaft system can be accurately predicted from those of its scale models by using the foregoing scaling factors. For this reason, it is believed that the presented approach will be significant for investigating the relevant phenomena in scale-model tests.
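
A hedged numerical illustration of how such scaling factors are applied (assuming the model and the full-size shaft are geometrically similar and made of the same material, in which case torsional natural frequencies scale inversely with the length scale factor); the numbers are hypothetical.

```python
length_scale = 10.0                      # lambda = L_full / L_model (assumed)
f_model_hz = [120.0, 310.0, 545.0]       # frequencies measured on the scale model (hypothetical)

# Same material => omega ~ sqrt(G/rho) / L, so full-size frequencies are lambda times smaller.
f_full_hz = [f / length_scale for f in f_model_hz]
print("Predicted full-size torsional natural frequencies [Hz]:", f_full_hz)
```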

Keywords: torsional vibration, full-size model, scale model, scaling laws

Procedia PDF Downloads 393
17547 Assessing Firm Readiness to Implement Cloud Computing: Toward a Comprehensive Model

Authors: Seyed Mohammadbagher Jafari, Elahe Mahdizadeh, Masomeh Ghahremani

Abstract:

Nowadays almost all organizations depend on information systems to run their businesses. Investment in information systems, and their maintenance to keep them in the best condition to support the firm's business, is one of the main issues for every organization. The concept of cloud computing was developed as a technical and economic model to address this issue. In cloud computing, the computing resources, including networks, applications, hardware and services, are configured as needed and are available at the moment of request. However, migration to the cloud is not an easy task, and there are many issues that should be taken into account. This study provides a comprehensive model to assess a firm's readiness to implement cloud computing. Through a systematic literature review, four dimensions of readiness were extracted: technological, human, organizational and environmental. Each dimension has various criteria that are discussed in detail. The model provides a framework for cloud computing readiness assessment, and organizations that intend to migrate to the cloud can use it as a tool to assess their readiness before making any decision on cloud implementation.

Keywords: cloud computing, human readiness, organizational readiness, readiness assessment model

Procedia PDF Downloads 394
17546 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract such features effectively. In this work, we propose an efficient emotion detection algorithm for face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information; in effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further introduce a Dynamic Soft-Margin SoftMax. The conventional SoftMax tends to reach the gold labels too quickly, which drives the model towards over-fitting, because it cannot adequately determine discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic, rather than static, input tensor shape in the SoftMax layer and by specifying a desired soft margin, which acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors, i.e., assigning more weight to classes that lie close to each other (namely, 'hard labels to learn'). In doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, the proposed optimizer addresses the weak convergence of the Adam optimizer for non-convex problems: it uses an alternative gradient-updating procedure with an exponentially weighted moving average for faster convergence and exploits weight decay to drastically reduce the learning rate near optima so as to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets, reaching 93.30% on FER-2013 (a 16% improvement over the previous first rank after 10 years), 90.73% on RAF-DB, and 100% k-fold average accuracy on CK+, and by providing top performance compared with other networks that require much larger training datasets.

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 74
17545 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition

Authors: Aref Ghafouri, Mohammad javad Mollakazemi, Farhad Asadi

Abstract:

In this paper, a model order reduction method is used to approximate linear and nonlinear aspects of experimental data. The method can be used to obtain an offline reduced model that approximates the experimental data, follows the data and the order of the system, and matches the experimental data at certain frequency ratios. In this study, the method is compared across different experimental data sets, and the influence of the chosen reduction order on obtaining a sufficient matching condition for following the data is investigated in terms of the imaginary and real parts of the frequency response curve. Finally, the effect of the reduction order, an important parameter, on nonlinear experimental data is explained further.

Keywords: frequency response, order of model reduction, frequency matching condition, nonlinear experimental data

Procedia PDF Downloads 400
17544 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna

Abstract:

Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for the biological control of pests, and it is important to determine the optimal conditions to produce the highest amount of its conidiospores. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized by using Artificial Neural Networks (ANNs). In order to gather data on this process, 30 experiments were carried out, taking into account the number of hours of culture (10 values distributed from 48 to 136 hours) and the culture humidity (70, 75 and 80 percent), with the number of conidiospores per gram of dry mass obtained as the response. The experimental results were used in an iterative algorithm to create 1,110 ANNs with different configurations, from one to three hidden layers, with each hidden layer containing from 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which learns the relationship between input and output values. The ANN with the best performance was then chosen to simulate the process and maximize conidiospore production. The best-performing ANN has 2 inputs, 1 output, and three hidden layers with 3, 10 and 10 neurons, respectively; its performance shows an R² value of 0.9900 and a root mean squared error of 1.2020. This ANN predicted that 644,175,467 conidiospores per gram of dry mass is the maximum amount, obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum, because the R² value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass.
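
A sketch of the modelling and optimization loop described above; synthetic data stands in for the 30 experiments, and scikit-learn's default solver replaces the Levenberg-Marquardt training, which is an assumption of this example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
hours = rng.uniform(48, 136, 30)
humidity = rng.choice([70, 75, 80], 30)
X = np.column_stack([hours, humidity])
y = 1e8 * np.exp(-((hours - 117) / 40) ** 2) * (humidity / 80)   # synthetic response

# Same topology as the best network reported: hidden layers of 3, 10 and 10 neurons.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(3, 10, 10), max_iter=5000,
                                   random_state=0)).fit(X, y)

# Grid search over the feasible region to locate the predicted production maximum.
grid = np.array([[h, w] for h in np.linspace(48, 136, 89) for w in (70, 75, 80)])
best = grid[np.argmax(model.predict(grid))]
print("Predicted optimum: %.0f hours of culture, %d%% humidity" % (best[0], best[1]))
```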

Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network

Procedia PDF Downloads 157
17543 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction

Authors: Yan Zhang

Abstract:

Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated from field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. We outline the contributions of this study from four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving to the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related tasks. It covers a variety of topics, including but not limited to failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine or device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices. For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network: the former may transfer data into the Cloud via WiFi directly, whereas the latter usually uses the radio communication inherent to the network, and the data is stored in a staging data node before it can be transmitted to the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults and failures. By showing a step-by-step process of data labeling, feature engineering, model construction and evaluation, we share the following experiences: (1) which specific data quality issues have a crucial impact on predictive maintenance use cases; and (2) how to train and evaluate a model when the training data contains inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights. We describe the tools we use, including data ingestion, streaming data processing, machine learning model training, and the tool that coordinates and schedules different jobs. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study: (1) it summarizes the landscape and challenges of predictive maintenance applications, and (2) it takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.

Keywords: Internet of Things, machine learning, predictive maintenance, streaming data

Procedia PDF Downloads 383
17542 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17%, an area under the receiver operating characteristic curve (AUC) with a median of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
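
A sketch of the feature-selection and evaluation pipeline described in the methodology, on synthetic data; the epigenetic features are placeholders, and the hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in for an epigenetic feature matrix (e.g. methylation levels) and diabetes labels.
X, y = make_classification(n_samples=300, n_features=100, n_informative=15, random_state=0)

# Recursive feature elimination with cross-validation around a linear SVM.
selector = RFECV(SVC(kernel="linear"), step=5, cv=5).fit(X, y)
X_sel = selector.transform(X)
print("Selected features:", selector.n_features_)

# Gradient boosting on the selected features, scored by recall as in the study.
gb = GradientBoostingClassifier(random_state=0)
print("Recall (5-fold CV):", cross_val_score(gb, X_sel, y, cv=5, scoring="recall").mean())
```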

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 53
17541 Cost-Effective, Accuracy Preserving Scalar Characterization for mmWave Transceivers

Authors: Mohammad Salah Abdullatif, Salam Hajjar, Paul Khanna

Abstract:

The development of instrument-grade mmWave transceivers comes with many challenges. A general rule of thumb is that the performance of the instrument must be higher than the performance of the unit under test in terms of accuracy and stability. The calibration and characterization of mmWave transceivers are important pillars for testing commercial products. Using a Vector Network Analyzer (VNA) with a mixer option has proven to be a high-performance approach to calibrating mmWave transceivers; however, this approach comes with a high cost. In this work, a reduced-cost method to calibrate mmWave transceivers is proposed, and a comparison between the proposed method and the VNA technology is provided. Significant challenges are discussed, and an approach to meet the requirements is proposed.

Keywords: mmWave transceiver, scalar characterization, coupler connection, magic tee connection, calibration, VNA, vector network analyzer

Procedia PDF Downloads 107
17540 Mechanical Properties and Microstructure of Ultra-High Performance Concrete Containing Fly Ash and Silica Fume

Authors: Jisong Zhang, Yinghua Zhao

Abstract:

The present study investigated the mechanical properties and microstructure of Ultra-High Performance Concrete (UHPC) containing supplementary cementitious materials (SCMs), such as fly ash (FA) and silica fume (SF), and verified the synergistic effect in the ternary system. On the basis of 30% fly ash replacement, the incorporation of either 10% SF or 20% SF shows better performance compared to the reference sample. The efficiency factor (k-value) was calculated as a measure of the synergistic effect to predict the compressive strength of UHPC with these SCMs. The SEM micrographs and the pore volume obtained from the BJH method show a high correlation with compressive strength. Further, an artificial neural network model was constructed for prediction of the compressive strength of UHPC containing these SCMs.

Keywords: artificial neural network, fly ash, mechanical properties, ultra-high performance concrete

Procedia PDF Downloads 413
17539 Gulfnet: The Advent of Computer Networking in Saudi Arabia and Its Social Impact

Authors: Abdullah Almowanes

Abstract:

The speed of adoption of new information and communication technologies is often seen as an indicator of the growth of knowledge- and technological-innovation-based regional economies. Indeed, technological progress and scientific inquiry in any society have undergone a particularly profound transformation with the introduction of computer networks. In the spring of 1981, the Bitnet network was launched to link thousands of nodes all over the world. In 1985, as one of the first adopters of Bitnet, Saudi Arabia launched a Bitnet-based network named Gulfnet that linked computer centers, universities, and libraries of Saudi Arabia and other Gulf countries through high-speed communication lines. In this paper, the origins and deployment of Gulfnet are discussed, as well as the social, economic, political, and cultural ramifications of the new information reality created by the network. Despite its significance, the social and cultural aspects of Gulfnet have not previously been investigated to a satisfactory degree in the history of science and technology literature. The presented research is based on extensive archival work aimed at seeking out and analyzing primary evidence from archival sources and records. During its decade-and-a-half-long existence, Gulfnet demonstrated that the scope and functionality of public computer networks in Saudi Arabia had to be fine-tuned for compliance with the Islamic culture and political system of the country. It also helped lay the groundwork for the subsequent introduction of the Internet. Since the 1980s, in just a few decades, the proliferation of computer networks has transformed communications worldwide.

Keywords: Bitnet, computer networks, computing and culture, Gulfnet, Saudi Arabia

Procedia PDF Downloads 245
17538 The Profit Trend of Cosmetics Products Using Bootstrap Edgeworth Approximation

Authors: Edlira Donefski, Lorenc Ekonomi, Tina Donefski

Abstract:

Edgeworth approximation is one of the most important statistical methods, with a considerable contribution to reducing the standard deviations of the independent variables' coefficients in a quantile regression model, which estimates the conditional median or other quantiles. In this paper, we apply these approximating statistical methods to an economic problem. We created and generated a quantile regression model to see how the profit gained is connected with the realized sales of cosmetic products, using real data taken from a local business. The linear regression of the generated profit on the realized sales was not free of autocorrelation and heteroscedasticity, which is why we used this model instead of linear regression. Our aim is to analyze in more detail the relation between the variables under study, profit and realized sales, and how to minimize the standard errors of the independent variable involved in this study, the level of realized sales. The statistical methods we apply are the Edgeworth approximation for independent and identically distributed (IID) cases, a bootstrap version of the model, and the Edgeworth approximation for the bootstrap quantile regression model. The graphics and results presented here identify the best approximating model for our study.
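
A small sketch of the bootstrap quantile-regression step on toy sales/profit data with heteroscedastic noise; the Edgeworth correction itself is not reproduced here, and all figures are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
sales = rng.uniform(100, 1000, 120)
profit = 0.3 * sales + rng.normal(0, 20 + 0.05 * sales)   # heteroscedastic noise
df = pd.DataFrame({"sales": sales, "profit": profit})

slopes = []
for _ in range(500):                                       # bootstrap resamples
    boot = df.sample(len(df), replace=True)
    fit = smf.quantreg("profit ~ sales", boot).fit(q=0.5)  # median regression
    slopes.append(fit.params["sales"])

print("Median-regression slope: %.3f (bootstrap s.e. %.3f)" % (np.mean(slopes), np.std(slopes)))
```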

Keywords: bootstrap, edgeworth approximation, IID, quantile

Procedia PDF Downloads 158
17537 A Location-Allocation-Routing Model for a Home Health Care Supply Chain Problem

Authors: Amir Mohammad Fathollahi Fard, Mostafa Hajiaghaei-Keshteli, Mohammad Mahdi Paydar

Abstract:

With increasing life expectancy in developed countries, the role of home care services is highlighted by both academia and industry in Home Health Care Supply Chain (HHCSC) companies. The main decisions in such supply chain systems are the location of pharmacies, the allocation of patients to these pharmacies, and the routing and scheduling of nurses to visit their patients. In this study, for the first time, an integrated model is proposed that covers all the preliminary and necessary decisions in these companies, namely a location-allocation-routing model. The model is NP-hard; therefore, an Imperialist Competitive Algorithm (ICA) is utilized to solve it, especially for large instances. Results confirm the efficiency of the developed model for HHCSC companies as well as the performance of the employed ICA.

Keywords: home health care supply chain, location-allocation-routing problem, imperialist competitive algorithm, optimization

Procedia PDF Downloads 397
17536 A Study of Behaviors in Using Social Networks of Corporate Personnel of Suan Sunandha Rajabhat University

Authors: Wipada Chaiwchan

Abstract:

This research aims to study the social network usage behaviors of corporate personnel at Suan Sunandha Rajabhat University. The sample consisted of two groups: 1) 70 academic officers and 2) 143 operational officers. The research instrument was a questionnaire, and the data were analyzed using percentages, means (x̄), standard deviations (S.D.), independent-sample t-tests to test the difference between the mean values of two independent samples, one-way ANOVA, and multiple comparisons by Fisher's Least Significant Difference (LSD). The study found that most corporate personnel use social networks for information awareness, knowledge, and online conferencing via social media, on average more than 3 hours per day, every day, including during working hours; most have computers connected to the Internet at home and use social media for communication in their operational processes. Social network usage behaviors were examined in relation to gender, age, job title, department, and type of personnel. Hypothesis testing and analysis of variance covered three aspects: the use of online social networks, the attitude of the users, and security. Overall, and for each aspect individually, the corporate personnel of Suan Sunandha Rajabhat University scored at a high level: use of social networks (x̄ = 3.22), attitude of the users (x̄ = 3.06), and security (x̄ = 3.11), with an overall mean of 3.11.

Keywords: social network, behaviors, social media, computer information systems

Procedia PDF Downloads 394
17535 A Vision-Making Exercise for the Twente Region: Development and Assessment

Authors: Gelareh Ghaderi

Abstract:

The overall objective of this study is to develop two alternative plans of spatial and infrastructural development for the Netwerkstad Twente (Twente region) up to 2040 and to assess the impacts of those two alternative plans. The region is located on the eastern border of the Netherlands and comprises five municipalities. Based on the strengths and opportunities of the five municipalities of the Netwerkstad Twente, and in order to develop the region internationally, strengthen the job market, and retain the skilled and knowledgeable young population, two alternative visions have been developed: an environment-oriented vision and an economy-oriented (market-oriented) vision. The environment-oriented vision is based mostly on preserving beautiful landscapes; Twente would be recognized as an educational center, driven by green technologies and an environment-friendly economy. The market-oriented vision is based on attracting and developing different economic activities in the region, building on the visions of the five cities of the Netwerkstad Twente, in order to improve the competitiveness of the region at national and international scales. On the basis of the two developed visions and the strategies for achieving them, land use and infrastructural development are modeled and assessed. Based on the SWOT analysis, criteria were formulated and employed in modeling the two contrasting land use visions for the year 2040. Land use modeling consists of determining future land use demand, assessing land suitability (suitability analysis), and allocating land uses to suitable land. Suitability analysis aims to determine the available supply of land for future development and to assess its suitability for specific types of land use on the basis of the formulated set of criteria. The suitability analysis was performed using CommunityViz, a planning support system application for spatially explicit land suitability and allocation. The Netwerkstad Twente has a highly developed transportation infrastructure, consisting of a highway network, national roads, regional roads, streets, local roads, a railway network, and a bike-path network. Based on assumed speed limits for the different road types, the infrastructure accessibility of the predicted land use parcels by four different transport modes is investigated. For the evaluation of the two development scenarios, the Multi-Criteria Evaluation (MCE) method is used; the first step was to determine the criteria used for the evaluation of each vision, with all factors categorized as economic, ecological, or social. The results of the multi-criteria evaluation show that the environment-oriented scenario has the higher overall score, with impressive scores on the economic and ecological factors. This is due to the fact that a large percentage of housing tends towards compact housing. The Twente region has immense potential, and the success of this project would define the eastern part of the Netherlands and create a truly competitive local economy with innovation and an attractive environment as its backbone.

Keywords: economy-oriented vision, environment-oriented vision, infrastructure, land use, multi-criteria assessment, vision

Procedia PDF Downloads 226
17534 Efficient Frequent Itemset Mining Methods over Real-Time Spatial Big Data

Authors: Hamdi Sana, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, there has been a huge increase in the use of spatio-temporal applications where data and queries are continuously moving. As a result, the need to process real-time spatio-temporal data is clear, and real-time stream data management has become a hot topic. The sliding window model and frequent itemset mining over dynamic data are among the most important problems in the context of data mining, and the sliding window model is widely used for data stream mining due to its emphasis on recent data and its bounded memory requirement. Existing methods use the traditional transaction-based sliding window model, where the window size is based on a fixed number of transactions. This model assumes that transactions arrive at a constant rate, which is not suited to real-time applications, and its use in such applications endangers their performance. Based on these observations, this paper relaxes the notion of window size and proposes the use of a timestamp-based sliding window model. In our proposed frequent itemset mining algorithm, support conditions are used to differentiate frequent and infrequent patterns. Thereafter, a tree is developed to incrementally maintain the essential information. We evaluate our contribution, and the preliminary results are quite promising.
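
A compact sketch of the timestamp-based window idea: transactions expire by age rather than by count, and itemset supports are recomputed over whatever currently falls inside the window. The window length, support threshold and stream below are illustrative, and the paper's tree structure is replaced by a simple counter.

```python
from collections import Counter, deque
from itertools import combinations

WINDOW_SECONDS = 60
MIN_SUPPORT = 2
window = deque()                     # (timestamp, frozenset(items))

def add_transaction(ts, items):
    window.append((ts, frozenset(items)))
    while window and window[0][0] < ts - WINDOW_SECONDS:   # expire by age, not by count
        window.popleft()

def frequent_itemsets(max_size=2):
    counts = Counter()
    for _, items in window:
        for k in range(1, max_size + 1):
            counts.update(frozenset(c) for c in combinations(sorted(items), k))
    return {s: c for s, c in counts.items() if c >= MIN_SUPPORT}

for ts, tx in [(0, "ab"), (10, "abc"), (30, "bc"), (55, "ab")]:
    add_transaction(ts, tx)
print(frequent_itemsets())
```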

Keywords: real-time spatial big data, frequent itemset, transaction-based sliding window model, timestamp-based sliding window model, weighted frequent patterns, tree, stream query

Procedia PDF Downloads 160
17533 Statistical Analysis and Impact Forecasting of Connected and Autonomous Vehicles on the Environment: Case Study in the State of Maryland

Authors: Alireza Ansariyar, Safieh Laaly

Abstract:

Over the last decades, the vehicle industry has shown increased interest in integrating autonomous, connected, and electrical technologies in vehicle design, with the primary hope of improving mobility and road safety while reducing transportation's environmental impact. Using the State of Maryland (MD) in the United States as a pilot study, this research investigates CAVs' fuel consumption and air pollutants (CO, PM, and NOx) and utilizes meaningful linear regression models to predict CAVs' environmental effects. The Maryland transportation network was simulated in VISUM software, and data on a set of variables were collected through a comprehensive survey. The amounts of pollutants and fuel consumption were obtained from the macro simulation for the time interval 2010 to 2021. Eventually, four linear regression models were proposed to predict the future amounts of CO, NOx, and PM pollutants and of fuel consumption. The results highlighted that CAVs' pollutants and fuel consumption have a significant correlation with the income, age, and race of the CAV customers. Furthermore, the reliability of the four statistical models was compared with the reliability of the macro simulation model outputs for the year 2030; the error for the three pollutants and fuel consumption obtained by the statistical models in SPSS was less than 9%. This study is expected to assist researchers and policymakers with planning decisions to reduce CAV environmental impacts in Maryland.

Keywords: connected and autonomous vehicles, statistical model, environmental effects, pollutants and fuel consumption, VISUM, linear regression models

Procedia PDF Downloads 442
17532 Staying When Everybody Else Is Leaving: Coping with High Out-Migration in Rural Areas of Serbia

Authors: Anne Allmrodt

Abstract:

Regions of South-East Europe have been characterised by high out-migration for decades. The reasons for leaving range from the hope of a better work situation to a better health care system and beyond. In Serbia, this high out-migration hits rural areas particularly hard, so that the population repeatedly shows negative growth. It is not hard to guess that this negative population growth has the potential to create various challenges for those who stay in rural areas. So how are they coping with the statistically proven high out-migration? With this in mind, the study investigates people's individual awareness of the social phenomenon of high out-migration and their daily-life strategies in rural areas. Furthermore, the study seeks to identify people's resilience skills in that context: is the condition of high out-migration conducive to resilience? The methodology combines a quantitative and a qualitative approach (mixed methods). For the quantitative part, a standardised questionnaire was developed, including a multiple-choice section and a choice experiment. The questionnaire was handed out to people living in rural areas of Serbia only (n = 100). The sheet included questions about people's awareness of high out-migration, their daily-life strategies or challenges, and their social network situation (data about the social network was necessary here since it is assumed to be an influencing variable for resilience). Furthermore, test persons were asked to make different choices for coping with high out-migration in a self-designed choice experiment. Additionally, the study included qualitative interviews with citizens from rural areas of Serbia. The interview topics focused on their awareness of high out-migration, their daily-life strategies and challenges, and their social network situation. The results show the following major findings. Awareness of high out-migration is not the same across test persons: some regard it as something positive for their own lives, others as negative or as having no effect at all. The way of coping generally depended, perhaps not surprisingly, on the person's social network. However, and this might be the most important finding, not everybody with a certain number of contacts had better coping strategies and was therefore more resilient. Here the results show that especially people with high affiliation and proximity inside their network were able to cope better and showed higher resilience skills. The study takes one step forward in terms of knowledge about societal resilience and the coping strategies of societies in rural areas. It has shown part of the other side of today's migration coin and points towards more sustainable rural development and community empowerment.

Keywords: coping, out-migration, resilience, rural development, social networks, south-east Europe

Procedia PDF Downloads 128
17531 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal

Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader

Abstract:

DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending the range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform harnessed with DigiMesh wireless network technology characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of Electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as Raspberry Pi, E-Health Sensor Shield, and Xbee DigiMesh modules. The platform is composed of multiple ECG acquisition devices labeled as Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (Sink Node). Two communication approaches are proposed: Single-hop and multi-hop. In the Single-hop approach, ECG signals are directly transmitted from a sensor node to the sink node through the XBee3 DigiMesh RF Module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSN). In the multi-hop approach, two sensor nodes communicate with the server (Sink Node) in a star configuration. This setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both Single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in Single-hop mode, with reliable communication over distances of approximately 300 meters in open areas. In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.
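
A minimal sink-node sketch of the single-hop case: the Raspberry Pi reads ECG samples forwarded by the sensor nodes through the coordinator's XBee module on a serial port. The port name, baud rate and the simple "node,sample" line format are assumptions; the real platform uses the XBee3 DigiMesh module's own framing.

```python
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:
    for _ in range(1000):                                      # read a bounded number of samples
        line = ser.readline().decode(errors="ignore").strip()  # e.g. "node1,512"
        if not line or "," not in line:
            continue
        node_id, sample = line.split(",", 1)
        print(f"{node_id}: ECG sample = {int(sample)}")
```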

Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform

Procedia PDF Downloads 78
17530 Masked Candlestick Model: A Pre-Trained Model for Trading Prediction

Authors: Ling Qi, Matloob Khushi, Josiah Poon

Abstract:

This paper introduces a pre-trained Masked Candlestick Model (MCM) for trading time-series data. The pre-trained model is based on three core designs. First, we convert the trading price data at each data point into a set of normalized elements and produce embeddings of each element. Second, we generate a masked sequence of such embedded elements as inputs for self-supervised learning. Third, we use the encoder mechanism from the transformer to train on these inputs. The masked model learns the contextual relations among the sequence of embedded elements, which can aid downstream classification tasks. To evaluate the performance of the pre-trained model, we fine-tune MCM for three different downstream classification tasks to predict future price trends. The fine-tuned models achieved better accuracy rates on all three tasks than the baseline models. To further analyze the effectiveness of MCM, we test the same architecture on three currency pairs, namely EUR/GBP, AUD/USD, and EUR/JPY. The experimental results demonstrate MCM's effectiveness on all three currency pairs and indicate its capability for signal extraction from trading data.
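
A minimal PyTorch sketch of the pre-training idea: embed normalized candlestick elements, replace a random subset of positions with a learned mask token, and train a transformer encoder to reconstruct the masked values. The dimensions, masking ratio and the OHLC element set are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MaskedCandlestickModel(nn.Module):
    def __init__(self, n_features=4, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_features)     # reconstruct the masked elements

    def forward(self, x, mask):                        # x: (B, T, n_features), mask: (B, T) bool
        h = self.embed(x)
        h[mask] = self.mask_token                      # hide the masked positions
        return self.head(self.encoder(h))

model = MaskedCandlestickModel()
x = torch.randn(8, 32, 4)                              # toy normalized candlestick elements
mask = torch.rand(8, 32) < 0.15                        # mask roughly 15% of positions
loss = nn.functional.mse_loss(model(x, mask)[mask], x[mask])
loss.backward()
print("Reconstruction loss:", float(loss))
```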

Keywords: masked language model, transformer, time series prediction, trading prediction, embedding, transfer learning, self-supervised learning

Procedia PDF Downloads 123