Search results for: dynamic network analysis
29964 Stabilization Control of the Nonlinear AIDS Model Based on the Theory of Polynomial Fuzzy Control Systems
Authors: Shahrokh Barati
Abstract:
In this paper, we first introduce the AIDS disease and propose a dynamic model to describe its progression. After a short history of nonlinear modeling with polynomial fuzzy systems, we review the stability conditions of such systems, which have attracted a large body of research on modeling and controlling AIDS in nonlinear dynamic form. Our approach adopts a control framework for polynomial fuzzy systems, which generalize the Takagi-Sugeno (T-S) fuzzy model. Stability conditions are derived on the basis of polynomial functions, and we then focus on designing an appropriate controller. We first determine the equilibrium points of the system and their conditions and, in order to examine changes in the parameters, present a polynomial fuzzy model that generalizes the earlier Takagi-Sugeno models. Using a case study, we evaluate the equations in both open loop and closed loop, and the closed-loop equations of the system are derived with the help of feedback control. To simulate the nonlinear AIDS model, we use the output of the polynomial fuzzy controller, which is able to make the parameters of the nonlinear system properly follow a stable reference model.
Keywords: polynomial fuzzy, AIDS, nonlinear AIDS model, fuzzy control systems
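As a rough illustration of closing the loop on a dynamic infection model, the sketch below simulates a standard three-compartment HIV model (healthy T cells, infected cells, free virus) under a simple saturated state-feedback treatment input. The model form, parameter values, and feedback gain K are illustrative assumptions, not the paper's polynomial fuzzy controller.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative three-compartment HIV model with a simple state-feedback
# treatment input u. Parameter values are placeholders, not taken from
# the paper.
s, d, beta, delta, p, c = 10.0, 0.01, 2e-5, 0.5, 100.0, 5.0
K = 0.8  # feedback gain on viral load (hypothetical)

def hiv(t, x):
    T, I, V = x
    u = min(1.0, K * V / (V + 1e3))          # saturated feedback control
    dT = s - d * T - (1 - u) * beta * T * V  # healthy T cells
    dI = (1 - u) * beta * T * V - delta * I  # infected cells
    dV = p * I - c * V                       # free virus
    return [dT, dI, dV]

sol = solve_ivp(hiv, (0, 365), [1000.0, 10.0, 1e4], dense_output=True)
print("final state (T, I, V):", sol.y[:, -1])
```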
Procedia PDF Downloads 468
29963 Deregulation of Turkish State Railways Based on Public-Private Partnership Approaches
Authors: S. Shakibaei, P. Alpkokin
Abstract:
The railway network is one of the major components of a country's transportation system and may be an indicator of the country's level of economic development. Since the 2000s, the revival of national railways and the development of High Speed Rail (HSR) lines have been among the most remarkable policies of the Turkish government in the railway sector. Within this trend, the railway age is being revived, and the coming decades present a golden opportunity. Undoubtedly, major infrastructures such as road and railway networks require sizeable investment capital and careful maintenance and repair. Traditionally, governments are held responsible for funding, operating, and maintaining these infrastructures. However, a lack or shortage of financial resources, risk responsibilities (particularly cost and time overruns), and in some cases inefficiency in the construction, operation, and management phases persuade governments to seek alternative options. The financial power, efficiency, and experience of the private sector are the factors convincing governments to collaborate with private parties to develop infrastructure. Public-Private Partnerships (PPP, 3P, or P3) and the related regulatory issues arise from these collaborations. In Turkey, PPP approaches have attracted attention particularly during the last decade, and such investments have been accelerated by the government to overcome budget limitations and to cope with the inefficiency of the public sector in improving the transportation network and its operation. This study presents a comprehensive overview of the PPP concept, evaluates the regulatory procedures in Europe, and proposes a general framework for Turkish State Railways (TCDD) as an outlook on the privatization, liberalization, and deregulation of the railway network.
Keywords: deregulation, high-speed railway, liberalization, privatization, public-private partnership
Procedia PDF Downloads 170
29962 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering
Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott
Abstract:
Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, so as to better understand tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. An in-depth analysis of state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
Keywords: cancer research, graph theory, machine learning, single cell analysis
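As a rough sketch of the shared-nearest-neighbor filtering idea described above, the code below builds an SNN graph, detects communities by modularity optimization (Louvain, available in networkx 2.8+), and flags cells whose within-community edge weight is weak. The random data, neighborhood size, and 0.5 threshold are placeholders, not the authors' implementation.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors

# Hypothetical SNN-based filtering sketch: cells whose edges into their
# own community are weak are treated as candidate noise.
X = np.random.rand(300, 50)          # stand-in for a cell x gene matrix
k = 15
nn = NearestNeighbors(n_neighbors=k).fit(X)
_, idx = nn.kneighbors(X)            # idx[i] includes i itself

G = nx.Graph()
for i in range(len(X)):
    for j in idx[i][1:]:
        shared = len(set(idx[i]) & set(idx[j]))  # shared-neighbor count
        G.add_edge(i, int(j), weight=shared / k)

communities = nx.community.louvain_communities(G, weight="weight", seed=0)
for com in communities:
    for v in com:
        inside = sum(G[v][u]["weight"] for u in G[v] if u in com)
        total = sum(G[v][u]["weight"] for u in G[v])
        if inside / total < 0.5:     # weak membership -> candidate noise
            print("flag cell", v)
```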
Procedia PDF Downloads 112
29961 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc
Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez
Abstract:
The evolution of mobile broadband technologies has made it possible to increase users' download rates for current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time, and productivity. Some of these failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that works stably for data and, for devices with that feature, VoIP (Voice over IP). This research presents a technical analysis of the connection and channeling processes between UE and eNodeB using the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, in which two operators were evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the results in the different transmission modes vary depending on the parameters BLER (Block Error Rate), throughput, and SNR (Signal-to-Noise Ratio). For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work at different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) together with Operator 2, is reassigning traffic to a lower band, AWS (1700 MHz); the difference in signal quality relative to the data connection established by Operator 2, and the difference found in the transmission modes determined by the eNodeB for Operator 1, are remarkable.
Keywords: BLER, LTE, network, QualiPoc, SNR
Procedia PDF Downloads 114
29960 Exploring the Role of Media Activity Theory as a Conceptual Basis for Advancing Journalism Education: A Comprehensive Analysis of Its Impact on News Production and Consumption in the Digital Age
Authors: Shohnaza Uzokova Beknazarovna
Abstract:
This research study provides a comprehensive exploration of the Theory of Media Activity and its relevance as a conceptual framework for journalism education. The author offers a thorough review of existing literature on media activity theory, emphasizing its potential to enhance the understanding of the evolving media landscape and its implications for journalism practice. Through a combination of theoretical analysis and practical examples, the paper elucidates the ways in which the Theory of Media Activity can inform and enrich journalism education, particularly in relation to the interactive and participatory nature of contemporary media. The author presents a compelling argument for the integration of media activity theory into journalism curricula, emphasizing its capacity to equip students with a nuanced understanding of the reciprocal relationship between media producers and consumers. Furthermore, the paper discusses the implications of technological advancements on media production and consumption, highlighting the need for journalism educators to prepare students to navigate and contribute to the future of journalism in a rapidly changing media environment. Overall, this research paper offers valuable insights into the potential benefits of embracing the Theory of Media Activity as a foundational framework for journalism education. Its thorough analysis and practical implications make it a valuable resource for educators, researchers, and practitioners seeking to enhance journalism pedagogy in response to the dynamic nature of contemporary media.
Keywords: theory of media activity, journalism education, media landscape, media production, media consumption, interactive media, participatory media, technological advancements, media producers, media consumers, journalism practice, contemporary media environment, journalism pedagogy, media theory, media studies
Procedia PDF Downloads 45
29959 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks
Authors: Andrew N. Saylor, James R. Peters
Abstract:
Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed the best used ReLU neurons, had three hidden layers, and had 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that, while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging
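A minimal sketch of the best-performing configuration reported above (ReLU activations, three hidden layers of 100 neurons, MSE loss, SGD with learning rate 0.01, batch size 10, early stopping) is given below. It is written against the modern Keras API rather than the TensorFlow 1.13 used in the study, and the input arrays are random placeholders for the resized, scaled X-ray data.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 481 flattened 500 x 187 images and their Cobb angles.
X_train = np.random.rand(481, 500 * 187).astype("float32")
y_train = np.random.uniform(10, 60, 481).astype("float32")

model = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(500 * 187,))]
    + [tf.keras.layers.Dense(100, activation="relu") for _ in range(3)]
    + [tf.keras.layers.Dense(1)]                 # regression output
)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="mse", metrics=["mae"])
early = tf.keras.callbacks.EarlyStopping(patience=10,
                                         restore_best_weights=True)
model.fit(X_train, y_train, batch_size=10, epochs=200,
          validation_split=0.2, callbacks=[early], verbose=0)
```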
Procedia PDF Downloads 129
29958 Damage Identification in Reinforced Concrete Beams Using Modal Parameters and Their Formulation
Authors: Ali Al-Ghalib, Fouad Mohammad
Abstract:
The identification of damage in reinforced concrete structures subjected to incremental cracking, exploiting vibration data, is recognized as a challenging topic in the published and heavily cited literature. This paper therefore attempts to shed light on the capabilities of dynamic methods when applied to reinforced concrete beams with various scenarios of defects. For this purpose, three different reinforced concrete beams were tested in the course of the study. The three beams were loaded statically to failure in incremental successive load cycles and later rehabilitated. After each static load stage, the beams were tested under a free-free support condition using experimental modal analysis. The beams were all of the same length and cross-sectional dimensions (2.0 × 0.14 × 0.09) m, but they differed in concrete compressive strength and in the type of damage presented. The experimental modal parameters, used as damage identification parameters, were shown to be computationally expensive and time-consuming, and to require substantial input and considerable expertise. Nonetheless, they proved plausible for the condition monitoring of the present case study as well as for tracking structural changes over the course of progressive loading. It is emphasized that satisfactory localization and quantification of structural changes (Level 2 and Level 3 of the damage identification problem) can only reasonably be achieved by considering the frequencies and mode shapes of a system in a proper analytical model. A convenient post-analysis of the various datasets of vibration measurements for the three beams was conducted in order to extract, check, and correlate the basic modal parameters, namely natural frequency, modal damping, and mode shapes. The results for the extracted modal parameters and their combination are utilized and discussed in this research as quantification parameters.
Keywords: experimental modal analysis, damage identification, structural health monitoring, reinforced concrete beam
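For readers unfamiliar with modal parameter extraction, the sketch below estimates a natural frequency and modal damping ratio from a synthetic single-degree-of-freedom frequency response using the half-power bandwidth method. It is a generic illustration under stated assumptions, not the post-analysis code used for the beams.

```python
import numpy as np

# Synthetic SDOF frequency response function (magnitude) with an assumed
# resonance at 85 Hz and 2% damping; real data would replace this.
freq = np.linspace(0, 200, 4000)                 # Hz
fn_true, zeta_true = 85.0, 0.02
H = 1.0 / np.sqrt((1 - (freq / fn_true) ** 2) ** 2
                  + (2 * zeta_true * freq / fn_true) ** 2)

k = np.argmax(H)                                  # resonance peak
half = H[k] / np.sqrt(2)                          # half-power level
left = freq[:k][H[:k] <= half][-1]                # crossing below peak
right = freq[k:][H[k:] <= half][0]                # crossing above peak
fn = freq[k]
zeta = (right - left) / (2 * fn)                  # half-power bandwidth
print(f"natural frequency ~ {fn:.1f} Hz, damping ratio ~ {zeta:.3f}")
```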
Procedia PDF Downloads 262
29957 Analysis of Some Produced Inhibitors for Corrosion of J55 Steel in NaCl Solution Saturated with CO₂
Authors: Ambrish Singh
Abstract:
The corrosion inhibition performance of pyran (AP) and benzimidazole (BI) derivatives on J55 steel in 3.5% NaCl solution saturated with CO₂ was investigated by electrochemical, weight loss, surface characterization, and theoretical studies. The electrochemical studies included electrochemical impedance spectroscopy (EIS), potentiodynamic polarization (PDP), electrochemical frequency modulation (EFM), and electrochemical frequency modulation trend (EFMT). Surface characterization was done using contact angle, scanning electron microscopy (SEM), and atomic force microscopy (AFM) techniques. DFT and molecular dynamics (MD) studies were carried out using the Gaussian and Materials Studio software packages. All the studies suggested good inhibition by the synthesized inhibitors on J55 steel in 3.5% NaCl solution saturated with CO₂, owing to the formation of a protective film on the surface. Molecular dynamics simulation was applied to search for the most stable configuration and the adsorption energies for the interaction of the inhibitors with the Fe (110) surface.
Keywords: corrosion, inhibitor, EFM, AFM, DFT, MD
Procedia PDF Downloads 103
29956 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy
Authors: Kemal Efe Eseller, Göktuğ Yazici
Abstract:
Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive character for the material to be tested. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material above a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, the spectral profiles of medicine samples were obtained via LIBS. The medicine datasets include two different concentrations of each of two paracetamol-based medicines, namely Aferin and Parafon. The spectral data of the samples were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits: 70% training - 30% test and 80% training - 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample amount was small. The machine learning results on the preprocessed and raw datasets were compared for both splits. This is the first time that the full range of supervised machine learning classification algorithms (decision trees, discriminant analysis, naïve Bayes, support vector machines (SVM), k-nearest neighbor (k-NN), ensemble learning, and neural network algorithms) has been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing
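A hedged sketch of such a pipeline using scikit-learn is shown below: standardization, PCA, and cross-validated scoring of several of the classifier families mentioned. The spectra, labels, component count, and fold count are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical stand-in for preprocessed LIBS spectra: rows are spectra,
# columns are intensities per wavelength, y encodes medicine/concentration.
X = np.random.rand(120, 2048)
y = np.random.randint(0, 4, 120)

for name, clf in [("SVM", SVC()), ("tree", DecisionTreeClassifier()),
                  ("NB", GaussianNB()), ("k-NN", KNeighborsClassifier())]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    scores = cross_val_score(pipe, X, y, cv=5)  # guards against overfit
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```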
Procedia PDF Downloads 86
29955 Investigation of Single Particle Breakage inside an Impact Mill
Authors: E. Ghasemi Ardi, K. J. Dong, A. B. Yu, R. Y. Yang
Abstract:
In the current work, a numerical model based on the discrete element method (DEM) was developed that provides information about particle dynamics and impact event conditions inside a laboratory-scale impact mill (Fritsch). It showed that each particle mostly experiences three impacts inside the mill. While the first impact frequently happens at the front surface of the rotor's rib, the second impact most often occurs on the side surfaces of the rotor's rib. It was also shown that, while the first impact happens at a small impact angle, mostly varying around 35º, the second impact happens at around 70º, which is close to a normal impact condition. An analysis of the impact energy revealed that, as the mill speed varied from 6000 to 14000 rpm, the ratio of the first impact's average impact energy to the minimum energy required to break a particle (Wₘᵢₙ) increased from 0.30 to 0.85. Moreover, the second impact was seen to impart intense impact energy to the particle, which can be considered the main cause of particle splitting. Finally, the information obtained from the DEM simulations, along with data from the conducted experiments, was implemented in semi-empirical equations in order to find the selection and breakage functions. Then, using a back-calculation approach, those parameters were used to predict the particle size distributions (PSDs) of ground particles under different impact energies. The results were compared with the experimental results and showed reasonable accuracy and prediction ability.
Keywords: single particle breakage, particle dynamic, population balance model, particle size distribution, discrete element method
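As an illustration of the back-calculation step, the sketch below fits a simple power-law cumulative breakage function B(x; y) = (x/y)^gamma to a progeny size distribution with scipy. The functional form and the numbers are assumptions for demonstration, not the paper's semi-empirical equations.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative back-calculation (not the authors' code): fit a power-law
# cumulative breakage function to a measured progeny size distribution.
y_parent = 2000.0                                  # um, parent size
x = np.array([125, 250, 500, 1000, 1500])          # progeny sieve sizes
B_meas = np.array([0.08, 0.18, 0.37, 0.66, 0.85])  # measured cumulative

def breakage(x, gamma):
    return (x / y_parent) ** gamma

(gamma_fit,), _ = curve_fit(breakage, x, B_meas, p0=[1.0])
print(f"fitted gamma = {gamma_fit:.2f}")
print("predicted PSD:", breakage(x, gamma_fit).round(2))
```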
Procedia PDF Downloads 289
29954 The Usefulness and Usability of a Linkedin Group for the Maintenance of a Community of Practice among Hand Surgeons Worldwide
Authors: Vaikunthan Rajaratnam
Abstract:
Maintaining continuous professional development among clinicians has been a challenge. Hand surgery is a unique specialty in which orthopaedic, plastic, and trauma surgeons come together. The requirement for a team-based approach to care, with the inclusion of other experts such as occupational therapists, physiotherapists, and orthotists and prosthetists, provides the impetus for the creation of communities of practice. This study analysed a community of practice in hand surgery that was created through a social networking website for professionals. The first objective was to discover the usefulness of this community of practice, created using the group function of LinkedIn. The second objective was to determine the usability of this platform for continuing professional development among the members of this community of practice. The methodology used mixed methods, including a quantitative analysis of the usefulness of the social networking website as a community of practice, using the analytics provided by the LinkedIn platform. Further qualitative analysis was performed on the various postings generated by the community of practice within the social networking website. This was augmented by a respondent-driven survey conducted online to assess the usefulness of the platform for continuous professional development. A total of 31 respondents were involved in this study. This study has shown that it is possible to create an engaging and interactive community of practice among hand surgeons using the group function of the professional social networking website LinkedIn. Over three years, the group has grown significantly, with members from multiple regions, and has produced engaging and interactive conversations online. From the results of the respondents' survey, it can be concluded that members were satisfied with the functionality and that it was an excellent platform for discussion and collaboration in the community of practice, with 69% satisfaction. Case-based discussions were the most useful function of the community of practice. The platform's usability was graded as excellent using a validated usability tool. This study has shown that the group function of the social networking site LinkedIn can easily and effectively be used as a community of practice, provides convenience to professionals, and has made an impact on their practice and on better care for patients. It has also shown that the platform is easy to use and has a high level of usability for the average healthcare professional. The platform provided improved connectivity among professionals involved in hand surgery care, which allowed the community to grow; with proper support and the contribution of relevant material by members, it offered a safe environment for the exchange of knowledge and the sharing of experience that is the foundation of a community of practice.
Keywords: community of practice, online community, hand surgery, lifelong learning, LinkedIn, social media, continuing professional development
Procedia PDF Downloads 314
29953 Artificial Neural Networks Face to Sudden Load Change for Shunt Active Power Filter
Authors: Dehini Rachid, Ferdi Brahim
Abstract:
The shunt active power filter (SAPF) is intended not only to improve the power factor but also to compensate for the unwanted harmonic currents produced by nonlinear loads. This paper presents a SAPF with an identification and control method based on artificial neural networks (ANN). Many techniques are used to identify harmonics, among them the conventional p-q theory and, more recently, artificial neural network methods. It is difficult to obtain satisfactory identification and control characteristics using a standard ANN, due to the nonlinearity of the system (SAPF plus fast nonlinear load variations). This work undertakes a systematic study of the problem in order to equip the SAPF with an ANN-based method for harmonics identification and DC link voltage control. The latter has been applied to the SAPF under fast nonlinear load variations. The results of computer simulations and experiments are given, which confirm the feasibility of the proposed active power filter.
Keywords: artificial neural networks (ANN), p-q theory, harmonics, total harmonic distortion
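For reference, a minimal sketch of the conventional p-q theory step mentioned above is given below: Clarke transform, instantaneous real and imaginary powers, and extraction of the oscillating power component that the SAPF must compensate. The signals are synthetic and the filtering is a crude moving average; this is the classical baseline, not the paper's ANN.

```python
import numpy as np

# Synthetic three-phase voltages and a nonlinear load current containing
# 5th and 7th harmonics; a three-wire system is assumed.
t = np.linspace(0, 0.1, 5000)
w = 2 * np.pi * 50
va, vb, vc = (np.sin(w * t + ph) for ph in (0, -2 * np.pi / 3, 2 * np.pi / 3))
ia = np.sin(w * t) + 0.2 * np.sin(5 * w * t) + 0.1 * np.sin(7 * w * t)
ib = np.sin(w * t - 2 * np.pi / 3) + 0.2 * np.sin(5 * (w * t - 2 * np.pi / 3))
ic = -(ia + ib)

# Power-invariant Clarke transform to the alpha-beta frame.
C = np.sqrt(2 / 3) * np.array([[1, -0.5, -0.5],
                               [0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
v_ab = C @ np.vstack([va, vb, vc])
i_ab = C @ np.vstack([ia, ib, ic])

p = v_ab[0] * i_ab[0] + v_ab[1] * i_ab[1]          # instantaneous real power
q = v_ab[1] * i_ab[0] - v_ab[0] * i_ab[1]          # instantaneous imaginary
p_dc = np.convolve(p, np.ones(500) / 500, "same")  # crude low-pass filter
p_osc = p - p_dc                                   # harmonic power to cancel
print("oscillating power RMS:", np.sqrt(np.mean(p_osc ** 2)).round(3))
```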
Procedia PDF Downloads 385
29952 The Role of Sustainable Development in the Design and Planning of Smart Cities Using GIS Techniques: Models of Arab Cities
Authors: Ahmed M. Jihad
Abstract:
The paper presents the concept of sustainable development and the role of geographic techniques in the design and planning of smart cities and in producing their maps with a geographical vision, identifying the relevant programs, tools, and example maps of Arab cities. The research problem lies in how to apply, process, and test these programs: what is the role of geographic techniques in planning and mapping the optimal locations for these cities? The paper proposes an addition to the designs of Iraqi cities, which can be developed in the future to serve as a model for interactive smart cities by developing their services. The importance of this paper stems from the dynamic concept of sustainable development, which has become a method of development demanded by a rapidly changing era in order to achieve social balance; the paper argues that sustainable development can be ensured through the use of information technology. The paper first presents, theoretically, the importance of the development concept and the associated design tools and programs, and then follows a system analysis approach using the latest software. The results suggest that new Iraqi cities can be developed with smart technologies, like some newly created Arab and European cities, through the introduction of international investment; plans can therefore be made to select the best programs for producing maps and smart cities in the future.
Keywords: geographic techniques, planning the cities, smart cities, sustainable development
Procedia PDF Downloads 209
29951 A New Reliability based Channel Allocation Model in Mobile Networks
Authors: Anujendra, Parag Kumar Guha Thakurta
Abstract:
Data transmission between mobile hosts and base stations (BSs) in mobile networks is often vulnerable to failure. Thus, efficient link connectivity, in terms of the services of both the base stations and the communication channels of the network, is required in wireless mobile networks to achieve highly reliable data transmission. In addition, it is observed that the number of blocked hosts increases due to an insufficient number of channels during heavy load in the network. Under such a scenario, channels must be allocated accordingly to offer reliable communication at any given time. Therefore, a reliability-based channel allocation model with acceptable system performance is proposed as a multi-objective optimization (MOO) problem in this paper. Two conflicting parameters, the Resource Reuse Factor (RRF) and the number of blocked calls, are optimized under a reliability constraint. The solution to this MOO problem is obtained through NSGA-II (Non-dominated Sorting Genetic Algorithm II). The effectiveness of the proposed model is shown with a set of experimental results.
Keywords: base station, channel, GA, pareto-optimal, reliability
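A toy version of such a two-objective formulation, solved with NSGA-II via the pymoo library, might look like the sketch below. The decision variables and objective expressions are illustrative stand-ins for the paper's RRF and blocked-call objectives.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

# Hypothetical toy problem: x is a channel-assignment intensity per cell;
# objective 1 penalizes reuse (interference), objective 2 penalizes
# blocked calls. Not the paper's actual formulation.
class ChannelAllocation(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=8, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        reuse = np.sum(x ** 2)            # more reuse -> more interference
        blocked = np.sum((1 - x) ** 2)    # fewer channels -> more blocking
        out["F"] = [reuse, blocked]

res = minimize(ChannelAllocation(), NSGA2(pop_size=50),
               ("n_gen", 100), seed=1, verbose=False)
print("Pareto front size:", len(res.F))
```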
Procedia PDF Downloads 408
29950 Multipurpose Agricultural Robot Platform: Conceptual Design of Control System Software for Autonomous Driving and Agricultural Operations Using Programmable Logic Controller
Authors: P. Abhishesh, B. S. Ryuh, Y. S. Oh, H. J. Moon, R. Akanksha
Abstract:
This paper discusses the conceptual design and development of control system software using a programmable logic controller (PLC) for the autonomous driving and agricultural operations of a Multipurpose Agricultural Robot Platform (MARP). Based on initial conditions given by field analysis and the desired agricultural operations, the structural design of MARP is developed using a modelling and analysis tool. The PLC, being robust and easy to use, is used to design the autonomous control system of the robot platform for the desired parameters. The robot is capable of autonomous driving and three automatic agricultural operations, namely hilling, mulching, and sowing of seeds, in that order. The input received from various sensors in the field is transmitted to the controller via a ZigBee network to make changes in the control program and obtain the desired field output. The research is conducted to assist farmers by reducing the labor hours required for agricultural activities through automation. This study provides an alternative to existing systems, which rely on machinery attached behind tractors and rigorous manual operations in the field, at an effective cost.
Keywords: agricultural operations, autonomous driving, MARP, PLC
Procedia PDF Downloads 361
29949 Characterization of Waste Thermocol Modified Bitumen by Spectroscopy, Microscopic Technique, and Dynamic Shear Rheometer
Authors: Supriya Mahida, Sangita, Yogesh U. Shah, Shanta Kumar
Abstract:
The global production of thermocol is increasing day by day due to its vast range of applications in many sectors. Thermocol, being non-biodegradable and more toxic than plastic, leads to a number of problems, including its management into value-added products, environmental damage, and landfill problems caused by its weight-to-volume ratio. Utilization of waste thermocol for the modification of bitumen binders results in waste thermocol modified bitumen (WTMB), used in road construction and maintenance technology. Modifying bituminous mixes by incorporating thermocol through a dry process is one new option, besides recycling processes, that consumes large amounts of waste thermocol. This approach supports waste management and provides a remedy for the thermocol waste disposal problem. The present challenge is to dispose of thermocol waste in its different forms in road infrastructure, whether through the dry process or through a wet process to be developed in the future. This paper focuses on the use of thermocol waste mixed with VG 10 bitumen in proportions of 0.5%, 1%, 1.5%, and 2% by weight of bitumen. The physical properties of the neat bitumen are evaluated and compared with those of the modified VG 10 bitumen containing thermocol. Empirical characterization of the bitumen, including penetration, softening point, and viscosity, has been carried out. The thermocol and the waste thermocol modified bitumen (WTMB) were further analyzed by Fourier transform infrared spectroscopy (FT-IR), field emission scanning electron microscopy (FESEM), and a dynamic shear rheometer (DSR).
Keywords: DSR, FESEM, FT-IR, thermocol wastes
Procedia PDF Downloads 165
29948 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network
Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem
Abstract:
The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, adopting a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance; these residuals were predicted using two different approaches. In the first approach, a local linear wavelet neural network model (LLWNN) was developed to predict the conditional variance using back-propagation learning algorithms. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity process (G-GARCH) was adopted, and the parameters of the k-factor GARMA-G-GARCH model were estimated using a wavelet methodology based on the discrete wavelet packet transform (DWPT) approach. The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.
Keywords: electricity price, k-factor GARMA, LLWNN, G-GARCH, forecasting
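Since the k-factor GARMA and G-GARCH models have no off-the-shelf Python implementation, the sketch below illustrates the same two-stage idea (conditional mean first, then conditional variance on the residuals) with a plain ARIMA plus GARCH(1,1) substitute, using statsmodels and the arch package on synthetic prices.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

# Synthetic price series standing in for electricity spot prices.
rng = np.random.default_rng(0)
prices = pd.Series(50 + np.cumsum(rng.normal(0, 1, 500)))

# Stage 1: conditional mean (ARIMA substitutes for k-factor GARMA).
mean_model = ARIMA(prices, order=(2, 1, 1)).fit()
resid = mean_model.resid.dropna()

# Stage 2: conditional variance on the residuals (GARCH substitutes
# for G-GARCH), then a 5-step-ahead variance forecast.
vol_model = arch_model(resid, vol="GARCH", p=1, q=1).fit(disp="off")
forecast = vol_model.forecast(horizon=5)
print(forecast.variance.iloc[-1])
```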
Procedia PDF Downloads 228
29947 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA
Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell
Abstract:
Hedgerows play an important role in a wide range of ecological habitats, landscape and agricultural management, carbon sequestration, and wood production. Detecting hedgerows accurately using satellite imagery is a challenging problem in remote sensing because, spatially, a hedge is very similar to a linear object such as a road and, from a spectral viewpoint, it is very similar to a forest. Very high spatial resolution (VHR) remote sensors have recently enabled the automatic detection of hedges by acquiring images with sufficient spectral and spatial resolution. Indeed, recent VHR remote sensing data provide the opportunity to detect hedgerows as line features, but difficulties remain in monitoring their characteristics at the landscape scale. This research uses TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015 at the Fermoy test site, Ireland, to detect hedgerows. Dual-polarization (HH/VV) Spotlight data are used for the detection. Various SAR image-processing techniques, combined on a trial-and-error basis with classification algorithms such as texture analysis, support vector machines, k-means, and random forest, are used to detect the hedgerows and characterize them. We apply Shannon entropy (ShE) and backscattering analysis of single and double bounce in a polarimetric analysis to carry out the object-oriented classification and finally extract the hedgerow network. The work is still in progress, and other methods remain to be applied to find the best approach for the study area. This research is under way to obtain the best results; here we present the preliminary finding that polarimetric TerraSAR-X imagery can potentially detect hedgerows.
Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis
Procedia PDF Downloads 229
29946 Does "R and D" Investment Drive Economic Growth? Evidence from Africa
Authors: Boopen Seetanah, R. V. Sannassee, Sheereen Fauzel, Robin Nunkoo
Abstract:
The bulk of research on the impact of research and development (R&D) has been carried out in developed economies, where the intensity of R&D expenditure has been relatively high and stable for many years. However, there is a paucity of similar studies in developing countries. In this paper, we provide empirical estimates of the impact of R&D investment on economic growth in a developing African economy (Mauritius), where R&D expenditure intensity was initially low but has been rising, albeit moderately, in recent years. Using a dynamic time series analysis over the period 1980 to 2014 in a vector autoregressive framework, R&D is shown to have a positive and significant effect on the economic progress of the island, although the impact is considerably smaller when compared both with the other ingredients of growth and with the elasticities reported for developed economies. Interestingly, there is evidence of bicausality between R&D and growth. Furthermore, R&D positively impacts both domestic and foreign investment, suggesting the possibility of indirect effects.
Keywords: R & D, VECM, Africa, Mauritius
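A minimal sketch of how such a bicausality test can be run in Python is shown below, using a VECM and pairwise Granger-causality tests from statsmodels on synthetic stand-ins for the GDP and R&D series.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import VECM

# Synthetic annual series (~1980-2014), not the paper's Mauritian data.
rng = np.random.default_rng(1)
n = 35
rd = np.cumsum(rng.normal(0.02, 0.05, n))
gdp = 0.6 * rd + np.cumsum(rng.normal(0.03, 0.04, n))
df = pd.DataFrame({"gdp": gdp, "rd": rd})

# VECM with one cointegrating relation; alpha gives the error-correction
# loadings of each variable.
vecm = VECM(df, k_ar_diff=1, coint_rank=1).fit()
print(vecm.alpha)

# Granger tests in both directions (second column causing the first).
grangercausalitytests(df[["gdp", "rd"]], maxlag=2)
grangercausalitytests(df[["rd", "gdp"]], maxlag=2)
```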
Procedia PDF Downloads 435
29945 Deep Convolutional Neural Network for Detection of Microaneurysms in Retinal Fundus Images at Early Stage
Authors: Goutam Kumar Ghorai, Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, G. Sarkar, Ashis K. Dhara
Abstract:
Diabetes mellitus is one of the most common chronic diseases in all countries, and its prevalence continues to increase significantly. Diabetic retinopathy (DR) is damage to the retina that occurs with long-term diabetes. DR is a major cause of blindness in the Indian population. Therefore, its early diagnosis is of utmost importance for preventing progression towards imminent irreversible loss of vision, particularly in the huge population across rural India. The barriers to eye examination of all diabetic patients are socioeconomic factors, lack of referrals, poor access to the healthcare system, lack of knowledge, an insufficient number of ophthalmologists, and a lack of networking between physicians, diabetologists, and ophthalmologists. Diabetic patients often visit a healthcare facility for a general checkup, but their eye condition remains largely undetected until they become symptomatic. This work focuses on the design and development of a fully automated intelligent decision system for screening retinal fundus images for the pathophysiology caused by microaneurysms in the early stage of the disease. Automated detection of microaneurysms is a challenging problem due to variation in color and the variation introduced by the field of view, inhomogeneous illumination, and pathological abnormalities. We have developed a convolutional neural network for the efficient detection of microaneurysms. A loss function is also developed to handle the severe class imbalance caused by the very small size of microaneurysms compared to the background. The network is able to locate the salient region containing microaneurysms even in noisy images captured by non-mydriatic cameras. The ground truth for the microaneurysms was created by expert ophthalmologists for the MESSIDOR database as well as for a private database collected from Indian patients. The network is trained from scratch using the fundus images of the MESSIDOR database. The proposed method is evaluated on DIARETDB1 and the private database. The method successfully detects microaneurysms in both dilated and non-dilated fundus images acquired from different medical centres. The proposed algorithm could be used to develop an AI-based, affordable, and accessible system providing service at grassroots-level primary healthcare units spread across the country, to cater to the needs of rural people unaware of the severe impact of DR.
Keywords: retinal fundus image, deep convolutional neural network, early detection of microaneurysms, screening of diabetic retinopathy
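A hypothetical sketch of the imbalance-aware setup is given below: a small fully convolutional network producing a per-pixel lesion map, trained with a weighted binary cross-entropy that up-weights the rare microaneurysm class. The architecture, the weighting scheme, and the POS_WEIGHT value are assumptions; the paper's exact loss is not given.

```python
import tensorflow as tf

# Microaneurysms occupy a tiny fraction of pixels, so the positive class
# is up-weighted; 200 is an illustrative value, not from the paper.
POS_WEIGHT = 200.0

def weighted_bce(y_true, y_pred):
    # Per-pixel BCE, scaled up wherever the ground-truth mask is positive.
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)
    weights = 1.0 + (POS_WEIGHT - 1.0) * tf.squeeze(y_true, axis=-1)
    return tf.reduce_mean(weights * bce)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),  # lesion map
])
model.compile(optimizer="adam", loss=weighted_bce)
```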
Procedia PDF Downloads 140
29944 From Risk/Security Analysis via Timespace to a Model of Human Vulnerability and Human Security
Authors: Anders Troedsson
Abstract:
For us humans, risk and insecurity are intimately linked to vulnerabilities: where there is vulnerability, there is potentially risk and insecurity. Reducing vulnerability through compensatory measures means decreasing the likelihood that a certain external event will be qualified as a risk/threat/assault, and thus also means increasing the individual's sense of security. The paper suggests that a meaningful way to approach the study of risk/insecurity is to organize thinking about the vulnerabilities that external phenomena evoke in humans, as perceived by them. Such phenomena are, through a set of given vulnerabilities, potentially translated into perceptions of "insecurity." An ontological discussion about the salient timespace characteristics of external phenomena as perceived by humans, including those which can potentially be qualified as risk/threat/assault, leads to the positing of two dimensions which are central for describing what the paper calls the essence of risk/threat/assault. As is argued, such modeling helps the analysis steer free of the subjective factor which is intimately connected to human perception and which mediates between phenomena "out there" potentially identified as risk/threat/assault and their translation into an experience of security or insecurity. A proposed set of universally given vulnerabilities is scrutinized with the help of the two dimensions, resulting in a modeling effort featuring four realms of vulnerabilities which together represent a dynamic whole. This model in turn informs modeling on human security.
Keywords: human vulnerabilities, human security, immediate-inert, material-immaterial, timespace
Procedia PDF Downloads 295
29943 Tehran Province Water and Wastewater Company Approach on Energy Efficiency by the Development of Renewable Energy to Achieving the Sustainable Development Legal Principle
Authors: Mohammad Parvaresh, Mahdi Babaee, Bahareh Arghand, Roushanak Fahimi Hanzaee, Davood Nourmohammadi
Abstract:
Today, the intelligent water and wastewater network is one of the key steps in realizing the smart city worldwide. The use of pressure relief valves in urban water networks to reduce pressure is necessary in the city of Tehran, but these valves lead to wasted water, higher power consumption, and environmental pollution; indeed, Tehran Province Water and Wastewater Co. uses a quarter of the industry's electricity. In this regard, Tehran Province Water and Wastewater Co. has identified solutions to reduce the direct and indirect costs of energy use in the production, transmission, and distribution of water, because the company has extensive facilities and a high capacity to realize a green economy and industry. The aim of this study is to analyze this new project in the water and wastewater industry as a path to sustainable development.
Keywords: Tehran Province Water and Wastewater Company, water network efficiency, sustainable development, International Environmental Law
Procedia PDF Downloads 290
29942 Dynamic Distribution Calibration for Improved Few-Shot Image Classification
Authors: Majid Habib Khan, Jinwei Zhao, Xinhong Hei, Liu Jiedong, Rana Shahzad Noor, Muhammad Imran
Abstract:
Deep learning is increasingly employed in image classification, yet the scarcity and high cost of labeled data for training remain a challenge. Limited samples often lead to overfitting due to biased sample distribution. This paper introduces a dynamic distribution calibration method for few-shot learning. Initially, base and new class samples undergo normalization to mitigate disparate feature magnitudes. A pre-trained model then extracts feature vectors from both classes. The method dynamically selects distribution characteristics from base classes (both adjacent and remote) in the embedding space, using a threshold value approach for new class samples. Given the propensity of similar classes to share feature distributions like mean and variance, this research assumes a Gaussian distribution for feature vectors. Subsequently, distributional features of new class samples are calibrated using a corrected hyperparameter, derived from the distribution features of both adjacent and distant base classes. This calibration augments the new class sample set. The technique demonstrates significant improvements, with up to 4% accuracy gains in few-shot classification challenges, as evidenced by tests on miniImagenet and CUB datasets.
Keywords: deep learning, computer vision, image classification, few-shot learning, threshold
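A rough sketch of this kind of calibration is shown below: the mean and covariance statistics of the base classes nearest to a one-shot sample are borrowed, corrected with a hyperparameter, and used to draw augmented features. The choice of two neighbors and the alpha value are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

# Synthetic base-class statistics standing in for those of a pre-trained
# feature extractor over the base classes.
rng = np.random.default_rng(0)
d, n_base = 64, 20
base_means = rng.normal(size=(n_base, d))
base_covs = np.stack([np.eye(d) * rng.uniform(0.5, 1.5)
                      for _ in range(n_base)])

x = rng.normal(size=d)                          # single new-class sample
dist = np.linalg.norm(base_means - x, axis=1)
nearest = np.argsort(dist)[:2]                  # 2 closest base classes

alpha = 0.2                                     # correction hyperparameter
mean_cal = (base_means[nearest].sum(0) + x) / (len(nearest) + 1)
cov_cal = base_covs[nearest].mean(0) + alpha * np.eye(d)

# Draw augmented features from the calibrated Gaussian; a classifier
# can then be trained on the enlarged new-class set.
augmented = rng.multivariate_normal(mean_cal, cov_cal, size=100)
print("augmented set:", augmented.shape)
```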
Procedia PDF Downloads 64
29940 A Closed-Loop Design Model for Sustainable Manufacturing by Integrating Forward Design and Reverse Design
Authors: Yuan-Jye Tseng, Yi-Shiuan Chen
Abstract:
In this paper, a new concept of a closed-loop design model is presented. The closed-loop design model is developed by integrating forward design and reverse design. Based on this new concept, a closed-loop design model for sustainable manufacturing is developed through the integrated evaluation of forward design, reverse design, and green manufacturing using a fuzzy analytic network process. In the design stage of a product, with a given product requirement and objective, there can be different ways to design the detailed components and specifications, and therefore different design cases that achieve the same requirement and objective. Thus, in the design evaluation stage, the different design cases must be analyzed and evaluated. The purpose of this research is to develop a model for evaluating the design cases by the integrated evaluation of the forward design, reverse design, and green manufacturing models. A fuzzy analytic network process model is presented for the integrated evaluation of the criteria in the three models. The comparison matrices for evaluating the criteria in the three groups are established, and the total relational values among the three groups represent the total relational effects. In application, a supermatrix can be created, and the total relational values can be used to evaluate the design cases for decision-making and to select the final design case. An example product is demonstrated in this presentation. It shows that the model is useful for the integrated evaluation of forward design, reverse design, and green manufacturing to achieve a closed-loop design for the sustainable manufacturing objective.
Keywords: design evaluation, forward design, reverse design, closed-loop design, supply chain management, closed-loop supply chain, fuzzy analytic network process
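As a small numerical illustration of the supermatrix step, the sketch below column-normalizes an assumed matrix of pairwise influence weights among the three groups and raises it to a high power until the columns converge to limit priorities. The weights are invented for demonstration; the paper's model is fuzzy and uses elicited comparison matrices.

```python
import numpy as np

# Assumed influence weights among the three groups (columns: influence
# exerted by forward design, reverse design, green manufacturing).
W = np.array([[0.0, 0.6, 0.3],        # forward design
              [0.5, 0.0, 0.7],        # reverse design
              [0.5, 0.4, 0.0]])       # green manufacturing
W = W / W.sum(axis=0)                 # column-stochastic supermatrix

# Raising the supermatrix to a high power yields the limit matrix, whose
# identical columns hold the total relational priorities.
limit = np.linalg.matrix_power(W, 100)
priorities = limit[:, 0]
print("relational priorities:", priorities.round(3))
```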
Procedia PDF Downloads 674
29939 Effect of Variation of Injection Timing on Performance and Emission Characteristics of Compression Ignition Engine: A CFD Approach
Authors: N. Balamurugan, N. V. Mahalakshmi
Abstract:
Compression ignition (CI) engines are known for their high thermal efficiency in comparison with spark-ignited (SI) engines. This makes CI engines a potential candidate for the future prime source of power for the transportation sector, reducing greenhouse gas emissions and shrinking the carbon footprint. However, CI engines produce high levels of NOx and soot emissions, and conventional methods to reduce them often result in the infamous NOx-soot trade-off. The injection parameters are among the most important factors in the working of CI engines: the engine performance, power output, and economy are greatly dependent on their effectiveness, and they have a direct impact on the combustion process and pollutant formation. The values of the injection parameters must be optimised according to the application of the engine. Control of the fuel injection mode is one achievable method for reducing NOx and soot emissions. This study assesses, compares, and analyses the influence of the injection characteristics, namely start-of-injection (SOI) timing, on combustion and emissions in the in-cylinder combustion processes of a conventional DI diesel engine system, using the commercial Computational Fluid Dynamics (CFD) package STAR-CD ES-ICE.
Keywords: variation of injection timing, compression ignition engine, spark-ignited, Computational Fluid Dynamic
Procedia PDF Downloads 291
29938 Images Selection and Best Descriptor Combination for Multi-Shot Person Re-Identification
Authors: Yousra Hadj Hassen, Walid Ayedi, Tarek Ouni, Mohamed Jallouli
Abstract:
To re-identify a person is to check whether he or she has already been seen over a camera network. Recently, re-identifying people over large public camera networks has become a crucial task of great importance for ensuring public security. The vision community has deeply investigated this area of research. Most existing research relies only on spatial appearance information from either one or multiple person images. In practice, the real person re-identification framework is a multi-shot scenario; however, efficiently modeling a person's appearance and choosing the best samples remain challenging problems. In this work, an extensive comparison of state-of-the-art descriptors, combined with the proposed frame selection method, is studied. Specifically, we evaluate the sample selection approach using multiple proposed descriptors. We show the effectiveness and advantages of the proposed method through extensive comparisons with related state-of-the-art approaches on two standard datasets, PRID2011 and iLIDS-VID.
Keywords: camera network, descriptor, model, multi-shot, person re-identification, selection
Procedia PDF Downloads 277
29937 A Study of Inter-Media Discourse Construction on Sino-US Trade Friction Based on Network Agenda Setting Theory
Authors: Wanying Xie
Abstract:
Against the background of increasing Sino-US trade friction, the two nations pay more attention to the media's words. This paper mainly studies the causality, effectiveness, and influence of discourse construction between traditional media and social media. It is based on Network Agenda Setting theory, a kind of associative memory model from psychology, which focuses on how the media affect audiences' cognition of issues and attributes, as well as the salience of the relations between people and matters. The sample chosen in this paper ranges from March 23, 2018, to April 30, 2019. A total of 395 tweets by Donald Trump were obtained, and 731 related reports were collected from mainstream American newspapers, including the New York Times, the Washington Post, and the Wall Street Journal, using Factiva and other databases. The sample data were processed with MAXQDA, while the media discourses were analyzed with SPSS and CiteSpace, with the aim of studying: 1) whether inter-media discourse construction exists; 2) which media (traditional media vs. social media) are dominant; and 3) the causality between the two media. The results show: 1) discourse construction between the three major American newspapers and Donald Trump's Twitter is evident in some periods; 2) the dominant position depends strongly on the events; and 3) the causality between the two media is shaped by many factors. New media technology shortens the agenda-setting effect to one day or less. By comparing the specific relations between the three major American newspapers and Donald Trump's Twitter, their popularity and influence can be assessed. Hopefully, this paper will enable readers to gain a more comprehensive understanding of the international media language and political environment.
Keywords: discourse construction, media language, network agenda-setting theory, sino-us trade friction
Procedia PDF Downloads 256
29936 Analysis of Elastic-Plastic Deformation of Reinforced Concrete Shear-Wall Structures under Earthquake Excitations
Authors: Oleg Kabantsev, Karomatullo Umarov
Abstract:
Engineering analysis of earthquake consequences demonstrates significantly different levels of damage to load-bearing systems of different types. Buildings with reinforced concrete columns and separate shear walls receive the highest level of damage. Traditional methods for predicting damage under earthquake excitations do not answer the question of why reinforced concrete frames with shear-wall bearing systems are especially vulnerable. Thus, studying the formation and accumulation of damage in reinforced concrete frames with shear walls requires new methods for assessing the stress-strain state, as well as new approaches to calculating the distribution of forces and stresses in the load-bearing system that account for the various mechanisms of elastic-plastic deformation of reinforced concrete columns and walls. The results of research into the nonlinear deformation of structures up to destruction (collapse) make it possible to substantiate the characteristics of the limit states of the various structures forming an earthquake-resistant load-bearing system. The elastic-plastic deformation processes of reinforced concrete frames with shear walls are studied on the basis of experimentally established parameters of the limit deformations of concrete and reinforcement under dynamic excitation. Limit values of the deformations are defined for the conditions under which local damage of the maximum permissible level forms in the structures. The research is performed by numerical methods using ETABS software. The results indicate that, under earthquake excitation, plastic deformations of various levels form in different groups of elements of the frame-with-shear-wall load-bearing system. During the main period of seismic excitation, insignificant plastic deformations, well below the permissible level, arise in the shear-wall elements of the load-bearing system. At the same time, plastic deformations form in the columns without exceeding the permissible value. At the final stage of seismic excitation, the level of plastic deformation in the shear walls reaches values corresponding to the plasticity coefficient of concrete, which is less than the maximum permissible value. This volume of plastic deformation leads to an increase in the overall deformations of the bearing system. With these deformation parameters in the shear walls, plastic deformations exceeding the limit values develop in the concrete columns, leading to their collapse. Based on the results presented in this study, it can be concluded that applying a seismic-force-reduction factor common to the whole load-bearing system does not correspond to the real conditions of damage formation and accumulation in the elements of the load-bearing system. Using a single seismic-force-reduction factor leads to errors in predicting the seismic resistance of reinforced concrete load-bearing systems. In order to provide the required level of seismic resistance for buildings with reinforced concrete columns and separate shear walls, it is necessary to use seismic-force-reduction factors differentiated by type of structural group.
Keywords: reinforced concrete structures, earthquake excitation, plasticity coefficients, seismic-force-reduction factor, nonlinear dynamic analysis
Procedia PDF Downloads 203
29935 A Cloud-Based Spectrum Database Approach for Licensed Shared Spectrum Access
Authors: Hazem Abd El Megeed, Mohamed El-Refaay, Norhan Magdi Osman
Abstract:
Spectrum scarcity is a challenging obstacle for wireless communications systems. It hinders the introduction of innovative wireless services and technologies that require larger bandwidth compared to legacy technologies. In addition, the current worldwide allocation of radio spectrum bands is already congested and cannot afford additional squeezing or optimization to accommodate new wireless technologies. This challenge is the result of accumulated contributions from different factors, which will be discussed later in this paper. One of these factors is the radio spectrum allocation policy currently governed by national regulatory authorities. The framework for this policy allocates a specified portion of the radio spectrum to a particular wireless service provider on an exclusive-utilization basis. This allocation is executed according to technical specifications determined by the standards bodies of each Radio Access Technology (RAT). Dynamic spectrum access is a framework for the flexible utilization of radio spectrum resources. In this framework, there is no exclusive allocation of radio spectrum, and even public safety agencies can share their spectrum bands according to a governing policy and service level agreements. In this paper, we explore different methods for accessing the spectrum dynamically and their associated implementation challenges.
Keywords: licensed shared access, cognitive radio, spectrum sharing, spectrum congestion, dynamic spectrum access, spectrum database, spectrum trading, reconfigurable radio systems, opportunistic spectrum allocation (OSA)
Procedia PDF Downloads 425
29934 An Experimental Study of Online Peer-to-Peer Language Learning
Authors: Abrar Al-Hasan
Abstract:
Web 2.0 has significantly increased the amount of information available to users, not only about firms and their offerings, but also about the activities of other individuals in their networks and markets. It is widely acknowledged that this increased availability of 'social' information, particularly about other individuals, is likely to influence a user's behavior and choices. However, there are very few systematic studies of how such increased information transparency about the behavior of other users in a focal user's network influences that user's behavior in the emerging marketplace of online language learning. This study examines the value and impact of 'social activities', wherein a user sees and interacts with the learning activities of her peers, on her language learning efficiency. An online experiment in a peer-to-peer language marketplace was conducted to compare the learning efficiency of users with 'social' information to that of users without it. The results highlight the impact and importance of 'social' information within the language learning context. The study concludes by exploring how these insights may inspire new developments in online education.
Keywords: e-Learning, language learning marketplace, peer-to-peer, social network
Procedia PDF Downloads 384