Search results for: deep graphical model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18104

16904 Generalized Extreme Value Regression with Binary Dependent Variable: An Application for Predicting Meteorological Drought Probabilities

Authors: Retius Chifurira

Abstract:

Logistic regression is the most widely used regression model for predicting meteorological drought probabilities. When the dependent variable is extreme, the logistic model fails to adequately capture drought probabilities. In order to predict drought probabilities adequately, we use the generalized linear model (GLM) with the quantile function of the generalized extreme value distribution (GEVD) as the link function. Maximum likelihood estimation is used to estimate the parameters of the generalized extreme value (GEV) regression model. We compare the performance of the logistic and GEV regression models in predicting drought probabilities for Zimbabwe. The performance of the regression models is assessed using goodness-of-fit measures, namely the relative root mean square error (RRMSE) and relative mean absolute error (RMAE). Results show that the GEV regression model performs better than the logistic model, thereby providing a good alternative candidate for predicting drought probabilities. This paper provides the first application of a GLM derived from extreme value theory to predict drought probabilities for a drought-prone country such as Zimbabwe.
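
As an illustration of the estimation step, the sketch below fits a binary regression whose inverse link is the GEV distribution function (so the quantile function is the link), by maximum likelihood in Python. This is a minimal sketch, not the authors' code; the data, covariates, shape parameter, and starting values are hypothetical.

```python
# Binary GLM with the GEV CDF as the inverse link, fitted by MLE with scipy.
# All data and starting values are synthetic illustrations.
import numpy as np
from scipy.optimize import minimize

def gev_cdf(eta, xi):
    """P(Y = 1) as the GEV CDF of the linear predictor eta."""
    z = np.clip(1.0 + xi * eta, 1e-10, None)    # support: 1 + xi*eta > 0
    return np.exp(-z ** (-1.0 / xi))

def neg_log_lik(params, X, y):
    beta, xi = params[:-1], params[-1]
    p = np.clip(gev_cdf(X @ beta, xi), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])  # intercept + rainfall index
y = (rng.random(500) < gev_cdf(X @ np.array([-1.0, 0.8]), -0.3)).astype(float)

start = np.array([0.0, 0.0, -0.1])              # [beta0, beta1, xi]
fit = minimize(neg_log_lik, start, args=(X, y), method="Nelder-Mead")
print("MLE of (beta0, beta1, xi):", fit.x)
```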

Keywords: generalized extreme value distribution, generalized linear model, mean annual rainfall, meteorological drought probabilities

Procedia PDF Downloads 188
16903 2D Surface Flow Model in the Biebrza Floodplain

Authors: Dorota Miroslaw-Swiatek, Mateusz Grygoruk, Sylwia Szporak

Abstract:

We applied a two-dimensional surface water flow model with irregular wet boundaries. In this model, the flow equations take the form of a 2-D non-linear diffusion equation, which allows spatial variations in flow resistance and topography to be accounted for. The calculation domain used to simulate the flow pattern in the floodplain is congruent with the Digital Elevation Model (DEM) grid. The rate and direction of sheet flow in wetlands are affected by vegetation type and density; therefore, the developed model takes into account the spatial distribution of vegetation resistance to water flow. The model was tested in a part of the Biebrza Valley, an area of outstanding heterogeneity in the elevation and flow resistance distributions due to various ecohydrological conditions and management measures. In our approach, we used the highest-possible quality of DEM in order to obtain hydraulic slopes and vegetation distribution parameters for the modelling. The DEM was created from a cloud of points measured with LiDAR technology. The LiDAR signal reflects both the land surface and all objects on top of it, such as vegetation, and the ability of the laser to penetrate varies with the density of the vegetation cover. Therefore, to obtain an accurate land surface model, the “vegetation effect” was corrected using data collected in the field (mostly vegetation height) and satellite imagery such as Ikonos (to distinguish the different vegetation types of the floodplain and represent them spatially). The model simulation was performed for the spring thaw flood of 2009.
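
The flow update at the core of such a model can be sketched as an explicit finite-difference step of the diffusion-wave equation with a Manning-type diffusivity. This is a hedged illustration, not the study's implementation; the DEM, roughness map, and step sizes below are hypothetical.

```python
# A rough explicit step for dh/dt = div( D grad(z + h) ) on a DEM grid,
# with diffusivity depending on depth and a spatially variable roughness n.
import numpy as np

def diffusion_wave_step(h, z, n, dx, dt, eps=1e-6):
    """Advance water depth h by one explicit time step."""
    H = z + h                                     # water surface elevation
    gy, gx = np.gradient(H, dx)                   # surface slopes
    slope = np.sqrt(gx**2 + gy**2) + eps
    D = h**(5.0 / 3.0) / (n * np.sqrt(slope))     # Manning-type diffusivity
    div = np.gradient(D * gx, dx, axis=1) + np.gradient(D * gy, dx, axis=0)
    return np.maximum(h + dt * div, 0.0)          # keep depths non-negative

z = np.fromfunction(lambda i, j: 0.01 * j, (100, 100))   # gently sloping DEM
n = np.full_like(z, 0.05)      # vegetation-dependent roughness, varies in practice
h = np.zeros_like(z); h[:, :5] = 0.5              # initial flooding along one edge
for _ in range(400):           # explicit scheme: dt must stay small for stability
    h = diffusion_wave_step(h, z, n, dx=10.0, dt=0.5)
```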

Keywords: floodplain flow, Biebrza valley, model simulation, 2D surface flow model

Procedia PDF Downloads 490
16902 A Study of Mode Choice Model Improvement Considering Age Grouping

Authors: Young-Hyun Seo, Hyunwoo Park, Dong-Kyu Kim, Seung-Young Kho

Abstract:

The purpose of this study is to provide an improved mode choice model with parameters that account for the age grouping of prime-aged and older travelers. In this study, 2010 Household Travel Survey data were used, and improper samples were removed through the analysis. The chosen alternative, date of birth, mode, origin code, destination code, departure time, and arrival time were taken from the Household Travel Survey. Through data preprocessing, travel time, travel cost, mode, and the ratios of people aged 45 to 55 years, 55 to 65 years, and over 65 years were calculated. After this manipulation, the mode choice model was constructed in LIMDEP by maximum likelihood estimation. A significance test was conducted for nine parameters: three age groups across three modes. The test was then conducted again for the mode choice model with the significant parameters, the travel cost variable, and the travel time variable. As a result of the model estimation, as age increases, the preference for the car decreases and the preference for the bus increases. This study is meaningful in that individual and household characteristics are applied to the aggregate model.
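
A minimal sketch of a multinomial logit mode choice model of this kind is shown below, using statsmodels rather than LIMDEP; the column names and synthetic data are hypothetical, so the fitted coefficients are meaningless and only the workflow is illustrated.

```python
# Multinomial logit with travel time, travel cost, and age-share variables,
# estimated by maximum likelihood (the method named in the abstract).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
X = pd.DataFrame({
    "travel_time": rng.uniform(10, 90, n),     # minutes
    "travel_cost": rng.uniform(1, 10, n),
    "share_45_55": rng.uniform(0, 1, n),       # hypothetical age-group ratios
    "share_55_65": rng.uniform(0, 1, n),
    "share_65_up": rng.uniform(0, 1, n),
})
y = rng.integers(0, 3, n)                      # 0 = car, 1 = bus, 2 = rail

model = sm.MNLogit(y, sm.add_constant(X)).fit(disp=False)   # MLE
print(model.summary())     # z-statistics play the role of the significance tests
```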

Keywords: age grouping, aging, mode choice model, multinomial logit model

Procedia PDF Downloads 315
16901 Multilevel Modeling of the Progression of HIV/AIDS Disease among Patients under HAART Treatment

Authors: Awol Seid Ebrie

Abstract:

HIV results in an incurable disease, AIDS. After a person is infected with the virus, it gradually destroys the infection-fighting cells called CD4 cells and makes the individual susceptible to opportunistic infections that cause severe or fatal health problems. Several studies show that the CD4 cell count is the most determinant indicator of the effectiveness of the treatment and of the progression of the disease. The objective of this paper is to investigate the progression of the disease over time among patients under HAART treatment. Two main approaches to generalized multilevel ordinal models, namely the proportional odds model and the nonproportional odds model, have been applied to the HAART data. The multilevel part of both models includes random intercepts and random coefficients. In total, four models are explored in the analysis, and the models are then compared using the deviance information criterion (DIC). Of these models, the random coefficients nonproportional odds model is selected as the best model for the HAART data used, as it has the smallest DIC value. The selected model shows that the progression of the disease increases as the time under treatment increases. In addition, it reveals that gender, baseline clinical stage, and functional status of the patient have a significant association with the progression of the disease.
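
The cumulative-logit structure underlying these models can be sketched as follows; the cut-points, coefficients, and random intercept below are hypothetical illustrations. In the nonproportional odds variant, the coefficient vector would be allowed to differ across the cut-points.

```python
# Category probabilities under the proportional odds model with a
# patient-level random intercept u: logit P(Y <= k) = cuts[k] - (x @ beta + u).
import numpy as np

def ordinal_probs(x, beta, cuts, u=0.0):
    """Return P(Y = k) for each ordered category."""
    eta = x @ beta + u
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(cuts) - eta)))   # P(Y <= k)
    cum = np.concatenate([cum, [1.0]])
    return np.diff(np.concatenate([[0.0], cum]))            # differences sum to 1

x = np.array([1.0, 0.0, 2.0])        # e.g. gender, baseline stage, time on HAART
beta = np.array([0.4, -0.6, 0.3])
cuts = [-1.0, 0.5, 2.0]              # 4 hypothetical CD4-based disease stages
print(ordinal_probs(x, beta, cuts, u=0.2))
```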

Keywords: nonproportional odds model, proportional odds model, random coefficients model, random intercepts model

Procedia PDF Downloads 411
16900 Metamorphic Computer Virus Classification Using Hidden Markov Model

Authors: Babak Bashari Rad

Abstract:

A metamorphic computer virus uses different code transformation techniques to mutate its body across duplicated instances. The characteristics and function of new instances are mostly similar to their parents', but they cannot be easily detected by the majority of antivirus products on the market, as these depend on string signature-based detection techniques. The purpose of this research is to propose a Hidden Markov Model for the classification of metamorphic viruses in executable files. In the proposed solution, portable executable files are inspected to extract the instruction opcodes needed for the examination of the code. A Hidden Markov Model trained on portable executable files is employed to classify the metamorphic viruses of the same family. The proposed model is able to generate and recognize common statistical features of mutated code. The model has been evaluated on a test data set, and its performance has been practically assessed in terms of False Positive Rate, Detection Rate, and Overall Accuracy. The results showed acceptable performance, with a high average Detection Rate of 99.7%.
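
A minimal sketch of such a training-and-scoring pipeline is given below. The use of the hmmlearn package is an assumption (not necessarily the authors' tooling; older hmmlearn versions expose this model as MultinomialHMM rather than CategoricalHMM), and the opcode encoding, state count, and threshold are hypothetical.

```python
# Train an HMM on opcode sequences of one virus family, then flag files whose
# normalized log-likelihood under the family model exceeds a threshold.
import numpy as np
from hmmlearn import hmm

family_seqs = [np.array([0, 1, 2, 1, 3, 2]), np.array([0, 2, 1, 1, 3])]
X = np.concatenate(family_seqs).reshape(-1, 1)   # opcodes as integer ids
lengths = [len(s) for s in family_seqs]

model = hmm.CategoricalHMM(n_components=3, n_iter=100, random_state=0)
model.fit(X, lengths)                            # Baum-Welch training

def is_family_member(opcodes, threshold=-2.5):
    score = model.score(opcodes.reshape(-1, 1)) / len(opcodes)
    return score > threshold                     # per-opcode log-likelihood test

print(is_family_member(np.array([0, 1, 2, 1, 3])))
```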

Keywords: malware classification, computer virus classification, metamorphic virus, metamorphic malware, Hidden Markov Model

Procedia PDF Downloads 305
16899 UML Model for Double-Loop Control Self-Adaptive Braking System

Authors: Heung Sun Yoon, Jong Tae Kim

Abstract:

In this paper, we present an activity diagram model for a double-loop control self-adaptive braking system. Since an activity diagram helps to improve the visibility of self-adaptation, we can easily find where improvement is needed in the double-loop control. Double-loop control is adopted because the design conditions and actual conditions can differ; the system is reconfigured at runtime by using double-loop control. We verified and validated our model through MATLAB simulation, comparing a single-loop control model with the double-loop control model. Simulation results show that double-loop control provides more consistent brake power control than single-loop control.
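
The double-loop idea can be sketched as a fast inner loop that tracks a brake-pressure setpoint while a slower outer loop adapts that setpoint from the measured deceleration, reconfiguring the controller when actual conditions differ from design conditions. All models, gains, and friction values below are hypothetical, not the paper's system.

```python
# Inner loop: fast proportional tracking of a brake-pressure setpoint.
# Outer loop: slow adaptation of the setpoint from the deceleration error.
dt = 0.01
mu_design, mu_actual = 1.0, 0.6          # design vs actual brake effectiveness
target_decel = 5.0                       # desired deceleration, m/s^2
kp_outer, kp_inner = 0.4, 10.0

setpoint = target_decel / mu_design      # initial setpoint from the design model
pressure = 0.0
for step in range(500):
    decel = mu_actual * pressure         # simplified vehicle/brake response
    if step % 10 == 0:                   # outer loop runs at a slower rate
        setpoint += kp_outer * (target_decel - decel)
    pressure += dt * kp_inner * (setpoint - pressure)   # inner loop
print(f"steady-state deceleration ~ {mu_actual * pressure:.2f} m/s^2")
```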

Keywords: activity diagram, automotive, braking system, double-loop, self-adaptive, UML, vehicle

Procedia PDF Downloads 401
16898 Digital Reconstruction of Museum's Statue Using 3D Scanner for Cultural Preservation in Indonesia

Authors: Ahmad Zaini, F. Muhammad Reza Hadafi, Surya Sumpeno, Muhtadin, Mochamad Hariadi

Abstract:

The lack of information about a museum's collection reduces the number of visits to the museum, so museum revitalization is an urgent activity for increasing visits. The research roadmap is to build a web-based application that visualizes the museum in virtual form, including the reconstruction of the museum's statues in 3D. This paper describes the implementation of a three-dimensional model reconstruction method based on a light-strip pattern projected on the museum statue using a 3D scanner. Noise removal, alignment, meshing, and model refinement processes are implemented to get a better 3D object reconstruction. The model's texture derives from surface texture mapping between the object's images and the reconstructed 3D model. The dimensional accuracy of the model is measured by calculating the relative error of the virtual model's dimensions against those of the original object. The result is a realistic, textured three-dimensional model with a relative error of around 4.3% to 5.8%.

Keywords: 3D reconstruction, structured light pattern, texture mapping, museum

Procedia PDF Downloads 452
16897 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and an expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images encompassing diverse cases is employed. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are used to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals. In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation provides a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
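
The overlap metrics named above can be computed on binary masks as in the sketch below; the prediction and ground-truth masks are hypothetical placeholders, not the study's data.

```python
# Dice coefficient and Jaccard index on binary segmentation masks.
import numpy as np

def dice(pred, truth, eps=1e-7):
    inter = np.logical_and(pred, truth).sum()
    return (2 * inter + eps) / (pred.sum() + truth.sum() + eps)

def jaccard(pred, truth, eps=1e-7):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return (inter + eps) / (union + eps)

pred = np.zeros((128, 128), bool); pred[30:80, 40:90] = True    # model output
truth = np.zeros((128, 128), bool); truth[35:85, 45:95] = True  # annotation
print(f"Dice = {dice(pred, truth):.3f}, Jaccard = {jaccard(pred, truth):.3f}")
```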

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 68
16896 Investigation of the Effects of Gamma Radiation on the Electrically Active Defects in InAs/InGaAs Quantum Dots Laser Structures Grown by Molecular Beam Epitaxy on GaAs Substrates Using Deep Level Transient Spectroscopy

Authors: M. Al Huwayz, A. Salhi, S. Alhassan, S. Alotaibi, A. Almalki, M. Almunyif, A. Alhassni, M. Henini

Abstract:

Recently, much research has been carried out to investigate quantum dots (QDs) lasers with the aim of increasing the gain of quantum well lasers. However, one of the difficulties with these structures is that electrically active defects can lead to serious issues in the performance of these devices. It is therefore essential to fully understand the types of defects introduced during the growth and/or the fabrication process. In this study, the effects of gamma radiation on the electrically active defects in p-i-n InAs/InGaAs QDs laser structures grown by the Molecular Beam Epitaxy (MBE) technique on GaAs substrates were investigated. Deep Level Transient Spectroscopy (DLTS), current-voltage (I-V), and capacitance-voltage (C-V) measurements were performed to explore these effects on the electrical properties of these QDs lasers. I-V measurements showed that the as-grown sample had better electrical properties than the irradiated sample. However, DLTS and Laplace DLTS measurements at different reverse biases revealed that the defects in the i-region of the p-i-n structures were decreased in the irradiated sample. In both samples, a trap with an activation energy of ~0.21 eV was assigned to the well-known defect M1 in GaAs layers.
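
An activation energy such as the ~0.21 eV quoted above is obtained, in standard DLTS practice, from the Arrhenius relation e/T² ∝ exp(−Ea/kT). The sketch below demonstrates that fit on synthetic emission-rate data generated for Ea = 0.21 eV; the temperatures and prefactor are hypothetical.

```python
# Arrhenius analysis: ln(e/T^2) vs 1/(kT) is linear with slope -Ea.
import numpy as np

k_B = 8.617e-5                         # Boltzmann constant, eV/K
T = np.linspace(100, 160, 7)           # measurement temperatures, K
Ea_true, prefactor = 0.21, 3.0e6       # hypothetical trap parameters
e_rate = prefactor * T**2 * np.exp(-Ea_true / (k_B * T))   # emission rates, 1/s

slope, intercept = np.polyfit(1.0 / (k_B * T), np.log(e_rate / T**2), 1)
print(f"extracted Ea = {-slope:.3f} eV")   # recovers ~0.21 eV
```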

Keywords: quantum dots laser structures, gamma radiation, DLTS, defects, InAs/InGaAs

Procedia PDF Downloads 178
16895 Time Estimation of Return to Sports Based on Classification of Health Levels of Anterior Cruciate Ligament Using a Convolutional Neural Network after Reconstruction Surgery

Authors: Zeinab Jafari, Ali Sharifnezhad, Mohammad Razi, Mohammad Haghpanahi, Arash Maghsoudi

Abstract:

Background and Objective: Sports-related rupture of the anterior cruciate ligament (ACL) and the injuries that follow have been associated with various disorders, such as long-lasting changes in muscle activation patterns in athletes, which might persist after ACL reconstruction (ACLR). The rupture of the ACL might result in abnormal patterns of movement execution, extending the treatment period and delaying athletes’ return to sports (RTS). As ACL injury is especially prevalent among athletes, the lengthy treatment process and athletes’ absence from sports are of great concern to athletes and coaches; thus, estimating the safe time of RTS is of crucial importance. Therefore, using a deep neural network (DNN) to classify the health levels of the ACL in injured athletes, this study aimed to estimate the safe time for athletes to return to competitions. Methods: Ten athletes with ACLR and fourteen healthy controls participated in this study. Three health levels of the ACL were defined: healthy, six months post-ACLR surgery, and nine months post-ACLR surgery. Athletes with ACLR were tested six and nine months after the ACLR surgery. During the course of this study, surface electromyography (sEMG) signals were recorded from five knee muscles, namely the Rectus Femoris (RF), Vastus Lateralis (VL), Vastus Medialis (VM), Biceps Femoris (BF), and Semitendinosus (ST), during single-leg drop landing (SLDL) and single-leg forward hopping (SLFH) tasks. The Pseudo-Wigner-Ville distribution (PWVD) was used to produce three-dimensional (3-D) images of the energy distribution patterns of the sEMG signals. These 3-D images were then converted to two-dimensional (2-D) images using a heat mapping technique and fed to a deep convolutional neural network (DCNN). Results: In this study, we estimated the safe time of RTS by designing a DCNN classifier with an accuracy of 90%, which could classify the ACL into the three health levels. Discussion: The findings of this study demonstrate the potential of the DCNN classification technique using sEMG signals in estimating RTS time, which will assist in evaluating the recovery process of ACLR in athletes.

Keywords: anterior cruciate ligament reconstruction, return to sports, surface electromyography, deep convolutional neural network

Procedia PDF Downloads 62
16894 The Application of FSI Techniques in the Modeling of Realistic Pulmonary Systems

Authors: Abdurrahim Bolukbasi, Hassan Athari, Dogan Ciloglu

Abstract:

Modeling the lung respiratory system, which has complex anatomy and biophysics, presents several challenges, including tissue-driven flow patterns and wall motion. Moreover, because the lungs stretch and recoil with each breath, the pulmonary system does not have static walls and structures. The direct relationship between airflow and tissue motion in the lung structures naturally calls for an FSI simulation technique. Therefore, developing a coupled FSI computational model is an important step toward the realistic simulation of pulmonary breathing mechanics. A simple but physiologically relevant three-dimensional deep lung geometry is designed, and a fluid-structure interaction (FSI) coupling technique is utilized to simulate the deformation of the lung parenchyma tissue that produces the airflow fields. The respiratory tissue system is investigated as a complex phenomenon with respect to respiratory patterns, fluid dynamics, tissue viscoelasticity, and the tidal breathing period.

Keywords: lung deformation and mechanics, tissue mechanics, viscoelasticity, fluid-structure interactions, ANSYS

Procedia PDF Downloads 307
16893 Evaluation of Turbulence Modelling of Gas-Liquid Two-Phase Flow in a Venturi

Authors: Mengke Zhan, Cheng-Gang Xie, Jian-Jun Shu

Abstract:

A venturi flowmeter is a common device used in multiphase flow rate measurement in the upstream oil and gas industry. Having a robust computational model for multiphase flow in a venturi is desirable for understanding the gas-liquid and fluid-pipe interactions and predicting pressure and phase distributions under various flow conditions. A steady Eulerian-Eulerian framework is used to simulate upward gas-liquid flow in a vertical venturi. The simulation results are compared with experimental measurements of venturi differential pressure and chord-averaged gas holdup in the venturi throat section. The choice of turbulence model is nontrivial in multiphase flow modelling in a venturi. A performance cross-comparison of the k-ε model, the Reynolds stress model (RSM), and the shear-stress transport (SST) k-ω turbulence model is made in this study. In terms of accuracy and computational cost, the SST k-ω turbulence model is observed to be the most efficient.

Keywords: computational fluid dynamics (CFD), gas-liquid flow, turbulence modelling, venturi

Procedia PDF Downloads 161
16892 Evaluation of High Damping Rubber Considering Initial History through Dynamic Loading Test and Program Analysis

Authors: Kyeong Hoon Park, Taiji Mazuda

Abstract:

High damping rubber (HDR) bearings are dissipating devices mainly used in seismic isolation systems and offer great damping performance. Although many studies have been conducted on the dynamic model of HDR bearings, few models can reflect phenomena such as the dependency of behavior on the experienced shear strain arising from the initial loading history. In order to develop a model that can represent this dependency of HDR on experienced shear strain due to the Mullins effect, a dynamic loading test was conducted using an HDR specimen. The reaction of the HDR was measured by applying a horizontal vibration with a hybrid actuator under a constant vertical load. Dynamic program analysis was also performed after the dynamic loading test. The dynamic model applied in the program analysis is a bilinear-type double-target model, modified from the typical bilinear model, which can express the nonlinear characteristics related to the initial history of HDR bearings. Based on the dynamic loading test and program analysis results, the equivalent stiffness and equivalent damping ratio were calculated to evaluate the mechanical properties of the HDR, and the feasibility of the bilinear-type double-target model was examined.

Keywords: base-isolation, bilinear model, high damping rubber, loading test

Procedia PDF Downloads 114
16891 Analysis of Reliability of Mining Shovel Using Weibull Model

Authors: Anurag Savarnya

Abstract:

The reliability of the various parts of an electric mining shovel has been assessed through the application of the Weibull model. The paper aims to optimize the reliability of components and increase their life cycle. A multilevel decomposition of the electric mining shovel was done, maintenance records were used to evaluate the failure data, and appropriate system characterization was done to model the system in terms of a reasonable number of components. The approach develops a mathematical model to assess the reliability of the electric mining shovel components, which can be used to predict component reliability and system performance. Reliability is an inherent attribute of a system; when the life-cycle costs of a system are analyzed, reliability plays an important role as a major driver of these costs and has considerable influence on system performance. Reliability analysis is an iterative process that begins with the specification of reliability goals consistent with cost and performance objectives. The data were collected from an Indian open cast coal mine, and the reliability of the various components of the electric mining shovel was assessed by fitting a Weibull model.
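
A minimal sketch of such a Weibull reliability assessment is shown below, using scipy with hypothetical failure times in place of the mine's maintenance records.

```python
# Fit a two-parameter Weibull to time-between-failure data and evaluate
# reliability R(t) = exp(-(t/eta)^beta) for a chosen mission time t.
import numpy as np
from scipy.stats import weibull_min

failures_h = np.array([120., 340., 410., 560., 700., 820., 990., 1100.])
beta, loc, eta = weibull_min.fit(failures_h, floc=0)   # shape, location, scale

t = 500.0
reliability = np.exp(-(t / eta) ** beta)               # survival probability
print(f"beta = {beta:.2f}, eta = {eta:.0f} h, R({t:.0f} h) = {reliability:.2f}")
# beta > 1 suggests wear-out failures; beta < 1 suggests early-life failures.
```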

Keywords: reliability, Weibull model, electric mining shovel

Procedia PDF Downloads 496
16890 Glaucoma Detection in Retinal Tomography Using the Vision Transformer

Authors: Sushish Baral, Pratibha Joshi, Yaman Maharjan

Abstract:

Glaucoma is a chronic eye condition that causes irreversible vision loss. Because it can be asymptomatic, early detection and treatment are critical to prevent vision loss. Multiple deep learning algorithms are used for the identification of glaucoma. Transformer-based architectures, which use the self-attention mechanism to encode long-range dependencies and acquire extremely expressive representations, have recently become popular. Convolutional architectures, on the other hand, lack knowledge of long-range dependencies in the image due to their intrinsic inductive biases. These observations motivate this work to investigate transformer-based solutions and the viability of adopting transformer-based network designs for glaucoma detection. Using retinal fundus images of the optic nerve head to develop a viable algorithm to assess the severity of glaucoma necessitates a large number of well-curated images. Initially, data are generated by augmenting the ocular pictures. The ocular images are then pre-processed to make them ready for further processing. The system is trained on the pre-processed images, and it classifies input images as normal or glaucoma based on the features retrieved during training. The Vision Transformer (ViT) architecture is well suited to this task, as its self-attention mechanism supports structural modeling. Extensive experiments are run on a common dataset, and the results are thoroughly validated and visualized.
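
A minimal sketch of fine-tuning a ViT for this binary task is given below. The use of the timm package, the specific ViT variant, and the training settings are assumptions for illustration, not the authors' pipeline.

```python
# Fine-tune a pretrained Vision Transformer for normal-vs-glaucoma classification.
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    """One step; images are (B, 3, 224, 224) preprocessed fundus photographs."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)   # self-attention over 16x16 patches
    loss.backward()
    optimizer.step()
    return loss.item()

dummy = torch.randn(4, 3, 224, 224)           # stands in for augmented fundus images
print(train_step(dummy, torch.tensor([0, 1, 0, 1])))
```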

Keywords: glaucoma, vision transformer, convolutional architectures, retinal fundus images, self-attention, deep learning

Procedia PDF Downloads 182
16889 R Software for Parameter Estimation of Spatio-Temporal Model

Authors: Budi Nurani Ruchjana, Atje Setiawan Abdullah, I. Gede Nyoman Mindra Jaya, Eddy Hermawan

Abstract:

In this paper, we propose an application package to estimate the parameters of a spatiotemporal model based on multivariate time series analysis using the open-source R software. We build packages mainly to estimate the parameters of the Generalized Space Time Autoregressive (GSTAR) model. GSTAR is a combination of time series and spatial models whose parameters vary per location. We use the method of Ordinary Least Squares (OLS) and the Mean Absolute Percentage Error (MAPE) to fit the model to real spatiotemporal phenomena. As case studies, we use oil production data from a volcanic layer at Jatibarang, Indonesia, and climate data such as rainfall in Indonesia. R is very user-friendly: it makes calculation easier, and data processing is accurate and fast. A limitation is that the R script built for estimating the parameters of the spatiotemporal GSTAR model is still restricted to stationary time series models. Therefore, the R program for Windows can be developed further for both theoretical studies and applications.

Keywords: GSTAR Model, MAPE, OLS method, oil production, R software

Procedia PDF Downloads 232
16888 Forecasting Models for Steel Demand Uncertainty Using Bayesian Methods

Authors: Watcharin Sangma, Onsiri Chanmuang, Pitsanu Tongkhow

Abstract:

A forecasting model for steel demand uncertainty in Thailand is proposed. It consists of trend, autocorrelation, and outliers in a hierarchical Bayesian framework. The proposed model uses a cumulative Weibull distribution function, latent first-order autocorrelation, and binary selection to account for trend, time-varying autocorrelation, and outliers, respectively. Gibbs sampling Markov chain Monte Carlo (MCMC) is used for parameter estimation. The proposed model is applied to steel demand index data in Thailand. The root mean square error (RMSE), mean absolute percentage error (MAPE), and mean absolute error (MAE) criteria are used for model comparison. The study reveals that the proposed model is more appropriate than the exponential smoothing method.

Keywords: forecasting model, steel demand uncertainty, hierarchical Bayesian framework, exponential smoothing method

Procedia PDF Downloads 343
16887 Developing Fuzzy Logic Model for Reliability Estimation: Case Study

Authors: Soroor K. H. Al-Khafaji, Manal Mohammad Abed

Abstract:

The aim of this paper is to evaluate the reliability of a complex engineering system and to design a fuzzy model for reliability estimation. The designed model has been applied to a vegetable oil purification system (a neutralization system) to help the specialist user, based on the concept of FMEA (Failure Mode and Effect Analysis), estimate the reliability of the repairable system in the vegetable oil industry. The fuzzy model has been used to predict the system reliability for a future time period, drawing on a historical database covering the past two years. The model can help to identify the system malfunctions and to predict its reliability during a future period, with more accurate and reasonable results than those obtained by the traditional method of reliability estimation.

Keywords: fuzzy logic, reliability, repairable systems, FMEA

Procedia PDF Downloads 600
16886 Developing a Systems Dynamics Model for Security Management

Authors: Kuan-Chou Chen

Abstract:

This paper demonstrates a simulation model of an information security system using the system dynamics approach. The relationships in the system model are designed to be simple and functional and do not necessarily represent any particular information security environment. The paper aims to develop a generic system dynamics information security model with implications for information security research. The interrelated and interdependent relationships of five primary sectors in the system dynamics model are presented in this paper. The integrated information security systems model includes (1) information security characteristics, (2) users, (3) technology, (4) business functions, and (5) policy and management. Environments, attacks, government, and social culture are defined as the external sector. The interactions within each of these sectors are depicted by system loop maps as well. The proposed system dynamics model will not only provide a conceptual framework for information security analysts and designers but also allow information security managers to remove the incongruity between the management of risk incidents and the management of knowledge, and it will further give information security managers and decision makers a foundation for managerial actions and policy decisions.

Keywords: system thinking, information security systems, security management, simulation

Procedia PDF Downloads 416
16885 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks

Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci

Abstract:

The need to reduce fuel consumption and noise during taxi operations at airports, in a scenario of constantly increasing air traffic, has resulted in an effort by the aerospace industry to move towards electric taxiing. In fact, this is one of the problems currently being addressed by SESAR JU, and two main solutions are being proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second solution, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice-versa). The presence of the tow trucks results in an increase in vehicle traffic inside the airport. Therefore, it is important to design the system in a way that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system will consist of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool will be responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. There are two main optimization strategies proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements, in order to find a global optimum. With the second strategy, called decentralized optimization or multi-agent optimization, the decision authority is distributed among several agents. These agents could be the aircraft, the tow trucks, and taxiway or runway intersections. This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.

Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization

Procedia PDF Downloads 115
16884 Location Quotients Model in Turkey’s Provinces and NUTS II Regions

Authors: Semih Sözer

Abstract:

One of the most common issues in economic systems is understanding the characteristics of economic activities in cities and regions. Although there are criticisms of economic base models in both conceptual and empirical respects, these models are useful tools for examining the economic structure of a nation, its regions, or its cities. This paper uses one of the methodologies of economic base models, namely the location quotients model. Data for this model include employment numbers for provinces and NUTS II regions in Turkey. The time series covers the years 1990, 2000, 2003, and 2009. The aim of this study is to find which sectors are export-base and which sectors are import-base in the provinces and regions. Model results show that big provinces or powerful regions (in population, size, etc.) mostly have basic sectors in their economic systems. However, the model results also reveal interesting patterns in different sectors across different provinces and regions.
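
The location quotient itself is the ratio of a sector's regional employment share to its national share; LQ > 1 marks an export-base (basic) sector. The sketch below computes it with pandas on hypothetical employment figures, not the Turkish provincial data used in the paper.

```python
# LQ = (regional sector employment share) / (national sector employment share).
import pandas as pd

emp = pd.DataFrame(
    {"agriculture": [120, 30], "manufacturing": [40, 90], "services": [80, 140]},
    index=["Province A", "Province B"],            # hypothetical employment, 1000s
)
national = emp.sum()                               # national sector totals
lq = emp.div(emp.sum(axis=1), axis=0) / (national / national.sum())
print(lq.round(2))
# LQ > 1: export-base sector for the province; LQ < 1: import-base sector.
```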

Keywords: economic base, location quotients model, regional economics, regional development

Procedia PDF Downloads 417
16883 Media Richness Perspective on Web 2.0 Usage for Knowledge Creation: The Case of the Cocoa Industry in Ghana

Authors: Albert Gyamfi

Abstract:

Cocoa plays a critical role in the socio-economic development of Ghana. Meanwhile, smallholder farmers, most of whom are illiterate, dominate the industry. According to the cocoa-based agricultural knowledge and information system (AKIS) model, knowledge is created and transferred to the industry between three key actors: cocoa researchers, extension experts, and cocoa farmers. Drawing on the SECI model, the media richness theory (MRT), and the AKIS model, a conceptual web 2.0-based AKIS model (AKIS 2.0) is developed and used to assess the possible effects of social media usage on knowledge creation in the Ghanaian cocoa industry. A mixed-methods approach with a survey questionnaire was employed, and a second-order multi-group structural equation model (SEM) was used to analyze the data. The study concludes that the use of web 2.0 applications for knowledge creation would lead to sustainable interactions among the key knowledge actors for effective knowledge creation in the cocoa industry in Ghana.

Keywords: agriculture, cocoa, knowledge, media, web 2.0

Procedia PDF Downloads 320
16882 Artificial Neural Network Based Approach for Estimation of Individual Vehicle Speed under Mixed Traffic Condition

Authors: Subhadip Biswas, Shivendra Maurya, Satish Chandra, Indrajit Ghosh

Abstract:

Developing a speed model is a challenging task, particularly under mixed traffic conditions, where the traffic composition plays a significant role in determining vehicular speed. The present research models individual vehicular speed in the context of mixed traffic on an urban arterial. Traffic speed and volume data were collected from three midblock arterial road sections in New Delhi. Using the field data, a volume-based speed prediction model was developed adopting the methodology of the Artificial Neural Network (ANN). The model developed in this work is capable of estimating speed for each individual vehicle category. Validation results show a great deal of agreement between the observed speeds and the values predicted by the model. It has also been observed that the ANN-based model performs better than other existing models in terms of accuracy. Finally, a sensitivity analysis was performed using the model in order to examine the effects of traffic volume and its composition on individual speeds.
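
A minimal sketch of such a volume-based ANN speed model is shown below, using scikit-learn on synthetic features; it is not the authors' network or data, and the feature set is a hypothetical stand-in for traffic volume and composition.

```python
# Predict car speed from traffic volume and composition with a small MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000
volume = rng.uniform(500, 4000, n)            # veh/h on the midblock section
share_2w = rng.uniform(0.1, 0.6, n)           # two-wheeler share of the stream
share_hv = rng.uniform(0.0, 0.2, n)           # heavy-vehicle share
X = np.column_stack([volume, share_2w, share_hv])
car_speed = 60 - 0.008 * volume - 15 * share_hv + rng.normal(0, 2, n)  # synthetic

X_tr, X_te, y_tr, y_te = train_test_split(X, car_speed, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {ann.score(X_te, y_te):.2f}")
```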

Keywords: speed model, artificial neural network, arterial, mixed traffic

Procedia PDF Downloads 377
16881 Encephalon: An Implementation of a Handwritten Mathematical Expression Solver

Authors: Shreeyam, Ranjan Kumar Sah, Shivangi

Abstract:

Recognizing and solving handwritten mathematical expressions can be a challenging task, particularly when characters must be segmented and classified. This project proposes a solution that uses a custom-designed Convolutional Neural Network (CNN) and image processing techniques to accurately recognize symbols and solve various types of equations, including arithmetic, quadratic, and trigonometric equations, as well as logical operations like AND, OR, NOT, NAND, XOR, and NOR. The proposed solution also provides a graphical solution, allowing users to visualize equations and their solutions. In addition to equation solving, the platform, called CNNCalc, offers a comprehensive learning experience for students: educational content, a quiz platform, and a coding platform for practicing programming skills in different languages such as C, Python, and Java, with progress tracking. This all-in-one solution makes the learning process engaging and enjoyable for students. The proposed methodology includes horizontal compact projection analysis and survey for segmentation and binarization, as well as connected component analysis and integrated connected component analysis for character classification. The compact projection algorithm compresses the horizontal projections to remove noise and obtain a clearer image, contributing to the accuracy of character segmentation. Experimental results demonstrate the accuracy and effectiveness of the proposed solution in solving a wide range of mathematical equations. CNNCalc features a user-friendly interface with a graphical representation of the equations being solved, making it an interactive and engaging experience for users. With its comprehensive features and accurate results, CNNCalc provides a powerful platform for solving equations, learning, and practicing programming, and it is poised to revolutionize the way students learn and solve mathematical equations.
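
The projection-based segmentation described above can be sketched as follows: ink pixels are summed along one axis and characters are cut at the empty gaps. The binarized image below is a hypothetical example, not the project's data.

```python
# Segment characters by cutting a projection profile at zero-ink gaps.
import numpy as np

def segment_by_projection(binary, axis=0):
    """Return (start, end) column/row pairs of connected ink runs."""
    profile = binary.sum(axis=axis)            # projection of ink pixels
    ink = profile > 0
    segments, start = [], None
    for i, on in enumerate(ink):
        if on and start is None:
            start = i
        elif not on and start is not None:
            segments.append((start, i)); start = None
    if start is not None:
        segments.append((start, len(ink)))
    return segments

img = np.zeros((20, 40), dtype=int)
img[5:15, 3:10] = 1                            # first character
img[5:15, 15:22] = 1                           # second character
print(segment_by_projection(img, axis=0))      # column cuts: [(3, 10), (15, 22)]
```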

Keywords: AI, ML, handwritten equation solver, maths, computer, CNNCalc, convolutional neural networks

Procedia PDF Downloads 106
16880 Deep Learning-Based Object Detection on Low Quality Images: A Case Study of Real-Time Traffic Monitoring

Authors: Jean-Francois Rajotte, Martin Sotir, Frank Gouineau

Abstract:

The installation and management of traffic monitoring devices can be costly from both a financial and resource point of view. It is therefore important to take advantage of in-place infrastructures to extract the most information. Here we show how low-quality urban road traffic images from cameras already available in many cities (such as Montreal, Vancouver, and Toronto) can be used to estimate traffic flow. To this end, we use a pre-trained neural network, developed for object detection, to count vehicles within images. We then compare the results with human annotations gathered through crowdsourcing campaigns. We use this comparison to assess performance and calibrate the neural network annotations. As a use case, we consider six months of continuous monitoring over hundreds of cameras installed in the city of Montreal. We compare the results with city-provided manual traffic counting performed in similar conditions at the same location. The good performance of our system allows us to consider applications which can monitor the traffic conditions in near real-time, making the counting usable for traffic-related services. Furthermore, the resulting annotations pave the way for building a historical vehicle counting dataset to be used for analysing the impact of road traffic on many city-related issues, such as urban planning, security, and pollution.
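
A minimal sketch of counting vehicles in a frame with a pretrained detector is given below. The choice of torchvision's Faster R-CNN, the score threshold, and the file name are assumptions for illustration, not the authors' network (older torchvision versions use pretrained=True instead of the weights argument).

```python
# Count vehicles in one camera frame with a COCO-pretrained detector.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
VEHICLE_IDS = {3, 6, 8}                        # COCO ids: car, bus, truck

def count_vehicles(path, score_thresh=0.5):
    img = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    keep = (out["scores"] > score_thresh) & \
           torch.tensor([l.item() in VEHICLE_IDS for l in out["labels"]])
    return int(keep.sum())

# print(count_vehicles("montreal_cam_0421.jpg"))   # hypothetical frame
```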

Keywords: traffic monitoring, deep learning, image annotation, vehicles, roads, artificial intelligence, real-time systems

Procedia PDF Downloads 184
16879 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction

Authors: Mingxin Li, Liya Ni

Abstract:

Autonomous parking is a valuable feature applicable to many robotics applications, such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot is able to park at the charging zone and charge itself without human intervention. Compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped only with a front camera, due to the camera view being limited by the robot's height and the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with only a front camera was achieved in a four-step process. Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots from the view angle of a small-scale robot. The dataset of images was divided into 70% for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of the detected empty parking space was processed with edge detection, followed by the computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and center of the available parking space were predicted based on the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the position of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center when it became visible again to the robot. The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point; the left and right wheel speeds were then obtained using inverse kinematics and sent to the motor driver. The four subtasks described above were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform with an upgrade to an Nvidia Jetson Nano board, which provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking spaces and their boundary lines; and improvement of performance after the hardware/software integration is completed.
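
The kinematics-based prediction step can be sketched as dead-reckoning the entrance point in the robot's moving frame from the wheel speeds; the wheel radius, track width, and speeds below are hypothetical values, not the robot's actual parameters, and the update is a first-order approximation.

```python
# Differential-drive dead reckoning of a target point in the robot frame.
import numpy as np

r, L, dt = 0.03, 0.15, 0.02          # wheel radius (m), track width (m), step (s)

def predict_target(target_local, w_left, w_right):
    """Re-express the target in the robot frame after one time step."""
    v = r * (w_right + w_left) / 2.0          # chassis linear velocity
    omega = r * (w_right - w_left) / L        # chassis angular velocity
    dtheta = omega * dt
    c, s = np.cos(dtheta), np.sin(dtheta)
    shifted = target_local - np.array([v * dt, 0.0])   # robot moved forward
    return np.array([c * shifted[0] + s * shifted[1],  # frame rotated by dtheta
                     -s * shifted[0] + c * shifted[1]])

target = np.array([1.0, 0.2])        # entrance point seen before it left the FOV
for _ in range(50):                  # 1 s of tracking without visual updates
    target = predict_target(target, w_left=10.0, w_right=11.0)
print(target)
```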

Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning

Procedia PDF Downloads 128
16878 Performance of Rapid Impact Compaction as a Middle-Deep Ground Improvement Technique

Authors: Bashar Tarawneh, Yasser Hakam

Abstract:

Rapid Impact Compaction (RIC) is a modern dynamic compaction device mainly used to compact sandy soils where silt and clay contents are low. The device uses piling hammer technology to increase the bearing capacity of soils through controlled impacts. The RIC device applies "controlled impact compaction" of the ground using a 9-ton hammer dropped from a height of between 0.3 m and 1.2 m onto a 1.5 m diameter patented steel foot. The delivered energy is about 26,487 to 105,948 joules per drop. To evaluate the performance of this technique, three project sites in the United Arab Emirates were improved using RIC. At those sites, loose to very loose fine-to-medium sand was encountered at a depth ranging from 1.0 m to 4.0 m below ground level. To evaluate the performance of the RIC, Cone Penetration Tests (CPT) were carried out before and after improvement, and load tests were carried out after the RIC work to assess settlements and bearing capacity. The soil was improved to a depth of about 5.0 m below ground level, depending on the CPT friction ratio (the ratio between sleeve friction and tip resistance). The CPT tip resistance increased significantly after the ground improvement work. Load tests showed enhancement of the soil bearing capacity and reduction of the potential settlements. This study demonstrates the successful application of RIC for middle-deep improvement and compaction of the ground. Foundation design criteria were achieved at all sites after the RIC work.
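
The quoted per-drop energies correspond exactly to the hammer's potential energy E = mgh at the two drop heights:

```latex
E_{\min} = mgh = 9000\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2} \times 0.3\,\mathrm{m} = 26{,}487\,\mathrm{J},
\qquad
E_{\max} = 9000 \times 9.81 \times 1.2 = 105{,}948\,\mathrm{J}.
```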

Keywords: compaction, RIC, ground improvement, CPT

Procedia PDF Downloads 357
16877 Modeling Heat-Related Mortality Based on Greenhouse Emissions in OECD Countries

Authors: Anderson Ngowa Chembe, John Olukuru

Abstract:

Greenhouse gas emissions from human activities are known to irreversibly increase global temperatures through the greenhouse effect. This study proposes a mortality model with sensitivity to heat-change effects as one of the underlying parameters. As such, the study sought to establish the relationship between greenhouse emissions and mortality indices in five OECD countries (USA, UK, Japan, Canada, and Germany). Upon establishing the relationship using correlation analysis, an additional parameter that accounts for the sensitivity of mortality rates to heat changes was incorporated into the Lee-Carter model. Based on the proposed model, new parameter estimates were calculated using iterative optimization algorithms. Finally, the goodness of fit of the original Lee-Carter model and the proposed model was compared using deviance comparison. The proposed model provides a better fit to mortality rates, especially in the USA, UK, and Germany, where the mortality indices have a strong positive correlation with the level of greenhouse emissions. The results of this study are of particular importance to actuaries, demographers, and climate-risk experts who seek better mortality-modeling techniques in the wake of heat effects caused by increased greenhouse emissions.
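
The classic Lee-Carter decomposition that the proposed model extends, ln m(x,t) = a_x + b_x k_t, can be sketched via SVD as below. The mortality surface is synthetic, and the paper's heat-sensitivity parameter is not included in this sketch.

```python
# Estimate the Lee-Carter age profile a_x, age sensitivity b_x, and period
# index k_t from a log-mortality matrix using the leading SVD component.
import numpy as np

rng = np.random.default_rng(3)
ages, years = 20, 40
log_m = (-8 + 0.08 * np.arange(ages))[:, None] \
        + np.outer(0.5 + 0.01 * np.arange(ages), -0.03 * np.arange(years)) \
        + rng.normal(0, 0.01, (ages, years))            # synthetic ln m(x,t)

a = log_m.mean(axis=1)                                  # a_x: average age profile
U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                             # normalize sum(b_x) = 1
k = s[0] * Vt[0] * U[:, 0].sum()                        # k_t with matching scale
print(b[:3], k[:3])
```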

Keywords: climate risk, greenhouse emissions, Lee-Carter model, OECD

Procedia PDF Downloads 331
16876 Design of a Channel Non-Persistent CSMA MAC Protocol Model for Complex Wireless Systems Based on SoC

Authors: Ibrahim A. Aref, Tarek El-Mihoub, Khadiga Ben Musa

Abstract:

This paper presents a Carrier Sense Multiple Access (CSMA) communication model based on an SoC design methodology. Such a model can be used to support the modelling of complex wireless communication systems; therefore, the use of such a communication model is an important technique in the construction of high-performance communication systems. SystemC has been chosen because it provides a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created through the modeling of the CSMA protocol, which can be used to achieve communication between all the agents and to coordinate access to the shared medium (channel).
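
A minimal sketch of the non-persistent CSMA behavior being modeled is given below, as a slotted-time Python simulation rather than the SystemC implementation; all parameters are hypothetical.

```python
# Non-persistent CSMA: an agent sensing a busy channel backs off for a random
# time instead of waiting persistently; simultaneous starts collide.
import random

random.seed(0)
N, SLOTS, FRAME, P_ARRIVAL, BACKOFF = 5, 10000, 4, 0.05, 16
pending = [False] * N
retry_at = [0] * N                 # earliest slot an agent may sense the channel
busy_until = 0
delivered = collisions = 0

for t in range(SLOTS):
    for a in range(N):             # new frame arrivals
        if not pending[a] and random.random() < P_ARRIVAL:
            pending[a], retry_at[a] = True, t
    senders = []
    for a in range(N):
        if pending[a] and retry_at[a] <= t:
            if t < busy_until:     # carrier sensed busy: random backoff
                retry_at[a] = t + random.randint(1, BACKOFF)
            else:
                senders.append(a)
    if len(senders) == 1:
        delivered += 1
        pending[senders[0]] = False
        busy_until = t + FRAME     # channel occupied by the frame
    elif senders:                  # simultaneous starts garble each other
        collisions += 1
        busy_until = t + FRAME
        for a in senders:
            retry_at[a] = t + random.randint(1, BACKOFF)

print(f"delivered={delivered}, collisions={collisions}")
```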

Keywords: SystemC, modelling, simulation, CSMA

Procedia PDF Downloads 412
16875 Model of Transshipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia

Authors: Oscar Javier Herrera Ochoa, Ivan Dario Romero Fonseca

Abstract:

This paper presents the design of a model for planning the distribution logistics operation. The significance of this work lies in its applicability to the analysis of small and medium enterprises (SMEs) handling dry freight in Bogotá. Two stages constitute this implementation: in the first, optimal planning is achieved through a hybrid model developed with mixed integer programming, which treats the transshipment operation as a classic transshipment model based on a combined load allocation model; the second is the specific routing of that operation through the heuristics of Clarke and Wright. As a result, an integral model is obtained to carry out the step-by-step planning of the distribution of dry freight for SMEs in Bogotá. In this manner, optimum assignments are established by utilizing transshipment centers, with the purpose of determining the specific routing based on the shortest distance traveled.
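
The routing stage's savings heuristic can be sketched as follows: routes are merged in decreasing order of the savings s(i,j) = d(0,i) + d(0,j) − d(i,j), subject to vehicle capacity. The distance matrix, demands, and capacity below are hypothetical, with node 0 standing in for the depot (transshipment center).

```python
# Parallel Clarke-Wright savings heuristic on a tiny instance.
import itertools

d = [[0, 4, 5, 7, 6],
     [4, 0, 2, 6, 7],
     [5, 2, 0, 3, 6],
     [7, 6, 3, 0, 2],
     [6, 7, 6, 2, 0]]
demand, capacity = [0, 2, 3, 2, 4], 7
routes = {i: [i] for i in range(1, 5)}           # keyed by each route's first node

savings = sorted(((d[0][i] + d[0][j] - d[i][j], i, j)
                  for i, j in itertools.combinations(range(1, 5), 2)),
                 reverse=True)

for s, i, j in savings:
    for a, b in ((i, j), (j, i)):                # try both merge orientations
        ra = next((r for r in routes.values() if r[-1] == a), None)
        rb = next((r for r in routes.values() if r[0] == b), None)
        if ra and rb and ra is not rb and \
           sum(demand[k] for k in ra + rb) <= capacity:
            merged = ra + rb
            del routes[ra[0]], routes[rb[0]]
            routes[merged[0]] = merged
            break

print(list(routes.values()))                     # e.g. [[1, 2], [3, 4]]
```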

Keywords: transshipment model, mixed integer programming, savings algorithm, dry freight transportation

Procedia PDF Downloads 212