Search results for: convolution neuron network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4874

2114 On the Use of Analytical Performance Models to Design a High-Performance Active Queue Management Scheme

Authors: Shahram Jamali, Samira Hamed

Abstract:

One of the open issues in the Random Early Detection (RED) algorithm is how to set its parameters to reach high performance under the dynamic conditions of the network. Although original RED uses fixed values for its parameters, this paper follows a model-based approach to upgrade the performance of the RED algorithm. It models the router's queue behavior using a Markov model and uses this model to predict future conditions of the queue. This prediction helps the proposed algorithm tune RED's parameters and provide better efficiency and performance. Extensive packet-level simulations confirm that the proposed algorithm, called Markov-RED, outperforms RED and FARED in terms of queue stability, bottleneck utilization and dropped packets count.
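
As a rough illustration of the kind of adaptive tuning described above, the Python sketch below implements the classic RED drop-probability rule and adjusts its maximum drop probability from a (hypothetical) predicted queue occupancy. The abstract does not give Markov-RED's actual prediction or tuning rule, so the heuristic here is only a stand-in under stated assumptions.

```python
import random

class AdaptiveRED:
    """Minimal RED sketch with an adaptive max_p.

    Hypothetical illustration only: the abstract does not give Markov-RED's
    actual tuning rule, so a simple predicted-occupancy heuristic stands in
    for the Markov-model-based prediction step.
    """

    def __init__(self, min_th=5, max_th=15, max_p=0.1, wq=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.wq = max_p, wq
        self.avg = 0.0  # EWMA of the instantaneous queue length

    def update_avg(self, queue_len):
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len

    def tune(self, predicted_occupancy):
        # Stand-in for the Markov prediction: raise max_p when the queue is
        # predicted to grow, lower it when it is predicted to drain.
        if predicted_occupancy > self.max_th:
            self.max_p = min(0.5, self.max_p * 1.1)
        elif predicted_occupancy < self.min_th:
            self.max_p = max(0.01, self.max_p * 0.9)

    def drop(self):
        # Classic RED drop probability, linear between the two thresholds.
        if self.avg < self.min_th:
            return False
        if self.avg >= self.max_th:
            return True
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

# Toy usage: feed instantaneous queue lengths and a naive occupancy forecast.
red, q = AdaptiveRED(), 0
for _ in range(200):
    q = max(0, q + random.choice([-1, 0, 1, 2]))
    red.update_avg(q)
    red.tune(predicted_occupancy=red.avg + 2)   # placeholder forecast
    if not red.drop():
        q += 1                                  # enqueue the packet
```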

Keywords: active queue management, RED, Markov model, random early detection algorithm

Procedia PDF Downloads 542
2113 Survival Analysis after a First Ischaemic Stroke Event: A Case-Control Study in the Adult Population of England.

Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski

Abstract:

Stroke is associated with a significant risk of morbidity and mortality. There is a scarcity of research on long-term survival after first-ever ischaemic stroke (IS) events in England with regard to the effects of different medical therapies and comorbidities. The objective of this study was to model the all-cause mortality after an IS diagnosis in the adult population of England. Using a retrospective case-control design, we extracted the electronic medical records of patients born in or prior to 1960 in England with a first-ever ischaemic stroke diagnosis from January 1986 to January 2017 within The Health Improvement Network (THIN) database. Participants with a history of ischaemic stroke were matched to three controls by sex, age at diagnosis and general practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a Weibull-Cox survival model which included both scale and shape effects and a shared random effect of general practice. The model included sex, birth cohort, socio-economic status, comorbidities and medical therapies. 20,250 patients with a history of IS (cases) and 55,519 controls were followed up to 30 years. From 2008 to 2015, the one-year all-cause mortality for IS patients declined, with an absolute change of -0.5%. Preventive treatments for cases increased considerably over time. These included prescriptions of statins and antihypertensives. However, prescriptions for antiplatelet drugs decreased in routine general practice after 2010. The survival model revealed a survival benefit of antiplatelet treatment for stroke survivors, with a hazard ratio (HR) of 0.92 (0.90-0.94). IS diagnosis had significant interactions with gender, age at entry and hypertension diagnosis. IS diagnosis was associated with a high risk of all-cause mortality, with HR = 3.39 (3.05-3.72) for cases compared to controls. Hypertension was associated with poor survival, with HR = 4.79 (4.49-5.09) for hypertensive cases relative to non-hypertensive controls, although the detrimental effect of hypertension did not reach significance for hypertensive controls, HR = 1.19 (0.82-1.56). This study of English primary care data showed that between 2008 and 2015, the rates of prescriptions of stroke preventive treatments increased and short-term all-cause mortality after IS declined. However, stroke resulted in poor long-term survival. Hypertension, a modifiable risk factor, was found to be associated with poor survival outcomes in IS patients. Antiplatelet drugs were found to be protective of survival. Better efforts are required to reduce the burden of stroke through health service development and primary prevention.
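
For orientation, a generic form of such a Weibull proportional-hazards ("Weibull-Cox") model, with covariate effects on both scale and shape and a shared random effect (frailty) for general practice j, can be written as below. This is a standard textbook form and only an assumed approximation of the authors' exact parameterisation.

```latex
h_{ij}(t \mid \mathbf{x}_{ij}, b_j) = \gamma_{ij}\,\lambda_{ij}\,t^{\,\gamma_{ij}-1},
\qquad
\lambda_{ij} = \exp\!\bigl(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + b_j\bigr),
\qquad
\gamma_{ij} = \exp\!\bigl(\mathbf{x}_{ij}^{\top}\boldsymbol{\alpha}\bigr),
\qquad
b_j \sim N(0, \sigma^2)
```

Here x_ij collects the covariates listed above (sex, birth cohort, socio-economic status, comorbidities, therapies), β and α are the scale and shape coefficients, b_j is the practice-level random effect, and the reported hazard ratio for a covariate k corresponds to exp(β_k).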

Keywords: general practice, hazard ratio, health improvement network (THIN), ischaemic stroke, multiple imputation, Weibull-Cox model.

Procedia PDF Downloads 189
2112 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to fast economic growth over the last ten years. Bogotá has been affected by high pollution events which led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain preliminary relations between all the parameters, and afterwards the K-means clustering technique was implemented to corroborate those relations and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night and early morning) to check for variation of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature and NO2 are the factors with the greatest influence on PM10 concentrations. Furthermore, it was confirmed that high humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend in pollutant concentrations over the last five years. In addition, in rainy periods (March-June and September-December) some trends regarding precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distribution among the Carvajal, Tunal and Puente Aranda stations, and also between Parque Simón Bolívar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful to establish preliminary relationships among variables, and K-means clustering to find patterns in the data and understand its distribution. The discovery of patterns in the data allows using these clusters as input to an Artificial Neural Network prediction model.
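
A minimal sketch of the PCA-then-K-means workflow described above is given below, using synthetic data and illustrative column names (the monitoring network's actual schema, component count and cluster count are not given in the abstract).

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical hourly records: meteorological variables and pollutant
# concentrations (column names are illustrative, not the study's schema).
cols = ["wind_speed", "wind_dir", "temperature", "humidity", "radiation",
        "PM10", "PM2_5", "CO", "SO2", "NO2", "O3"]
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(1000, len(cols))), columns=cols)

X = StandardScaler().fit_transform(data)          # put variables on one scale

pca = PCA(n_components=3).fit(X)                  # preliminary relations
loadings = pd.DataFrame(pca.components_.T, index=cols,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))                          # which variables co-vary

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
data["cluster"] = clusters                        # patterns / station regimes
print(data.groupby("cluster").mean().round(2))
```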

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 259
2111 Mechanical Properties and Microstructure of Ultra-High Performance Concrete Containing Fly Ash and Silica Fume

Authors: Jisong Zhang, Yinghua Zhao

Abstract:

The present study investigated the mechanical properties and microstructure of Ultra-High Performance Concrete (UHPC) containing supplementary cementitious materials (SCMs), such as fly ash (FA) and silica fume (SF), and verified the synergistic effect in the ternary system. On the basis of 30% fly ash replacement, the incorporation of either 10% SF or 20% SF shows better performance compared to the reference sample. The efficiency factor (k-value) was calculated as a measure of the synergistic effect to predict the compressive strength of UHPC with these SCMs. The SEM micrographs and the pore volume from the BJH method indicate a high correlation with compressive strength. Further, an artificial neural network model was constructed for prediction of the compressive strength of UHPC containing these SCMs.
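
The sketch below shows, with synthetic data, the kind of ANN regression the abstract mentions for predicting compressive strength from mix proportions. The feature set, target relationship and network size are assumptions, not the authors' experimental design.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Hypothetical mix-design features: cement, fly ash and silica fume fractions,
# water/binder ratio and curing age; the target is compressive strength (MPa).
rng = np.random.default_rng(1)
X = rng.uniform([0.5, 0.0, 0.0, 0.14, 3], [1.0, 0.3, 0.2, 0.22, 90], size=(200, 5))
y = 150 + 80*X[:, 2] - 200*(X[:, 3] - 0.18) + 0.3*X[:, 4] + rng.normal(0, 5, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out mixes:", round(model.score(X_te, y_te), 3))
```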

Keywords: artificial neural network, fly ash, mechanical properties, ultra-high performance concrete

Procedia PDF Downloads 417
2110 Firm's Growth Leading Dimensions of Blockchain Empowered Information Management System: An Empirical Study

Authors: Umang Varshney, Amit Karamchandani, Rohit Kapoor

Abstract:

Practitioners and researchers have realized that blockchain is not limited to currency. Blockchain as a distributed ledger can ensure a transparent and traceable supply chain. Due to blockchain-enabled IoT devices, a firm's information management system can now take inputs from other supply chain partners in real time. This study aims to provide empirical evidence of the dimensions responsible for the growth of firms that have implemented blockchain, and to highlight how sector (manufacturing or service), the state's regulatory environment, and the choice of blockchain network affect blockchain's usefulness. This post-adoption study seeks to validate the findings of pre-adoption studies on blockchain. Data will be collected through a survey of managers working in blockchain-implemented firms and analyzed through PLS-SEM.

Keywords: blockchain, information management system, PLS-SEM, firm's growth

Procedia PDF Downloads 127
2109 Research on the Updating Strategy of Public Space in Small Towns in Zhejiang Province under the Background of New-Style Urbanization

Authors: Chen Yao, Wang Ke

Abstract:

Small towns are the most basic administrative units in China, connecting cities and rural areas. Small towns play an important role in promoting local urban and rural economic development, providing the main public services, and maintaining social stability in social governance. With the vigorous development of small towns and the transformation of the industrial structure, changes in social structure, spatial structure and lifestyle are lagging behind, so that the spatial form and landscape style belong to neither the cities nor the rural areas, seriously affecting the quality of people's living space and environment. The rural economy in Zhejiang Province has taken off, and its society and population are developing in relative stability. In September 2016, Zhejiang Province issued the 'Technical Guidelines for Comprehensive Environmental Remediation of Small Towns in Zhejiang Province' in order to comprehensively implement the environmental remediation of small towns, with the main content of strengthening planning and design leadership and regulating environmental sanitation, urban order and town appearance. In November 2016, Huzhou City started the comprehensive environmental improvement of small towns, striving to significantly improve 115 small towns within three years and to create a number of high-quality, distinctive and beautiful towns with the features of 'clean and livable, rational layout, industrial development, poetry and painting style'. This paper takes Meixi Town, Zhangwu Town and Sanchuan Village in Huzhou City as empirical cases and analyzes small-town public space by applying the theories of actor-network and space syntax. It analyzes the spatial composition of actor and social structure elements and explores the relationship between actors' spatial practice and public open space by drawing on actor-network theory. The paper then introduces the relevant theories and methods of space syntax and carries out a quantitative analysis of small-town spaces for research and design planning. On this basis, it proposes effective updating strategies for the existing problems in public space. Through planning and design at the building level, the dissonant factors produced by various combinations of spatial factors, and between landscape design and urban texture, during small-town development can be resolved, the inhabitants' quality of life promoted, and the vitality of town development increased.

Keywords: small towns, urbanization, public space, updating

Procedia PDF Downloads 231
2108 Adaptive Data Approximations Codec (ADAC) for AI/ML-based Cyber-Physical Systems

Authors: Yong-Kyu Jung

Abstract:

The fast growth in information technology has led to demands to access and process data. Cyber-physical systems (CPSs) depend heavily on the timing of hardware/software operations and communication over the network (i.e., real-time/parallel operations in CPSs, e.g., autonomous vehicles). Data processing is an important means of overcoming the issues confronting data management, reducing the gap between technological growth on one side and data complexity and channel bandwidth on the other. An adaptive perpetual data approximation method is introduced to manage the actual entropy of the digital spectrum. ADAC, implemented as an accelerator and/or as apps for servers and smart connected devices, adaptively rescales digital content (by 62.8% on average) and reduces data processing/access time and energy as well as encryption/decryption overheads in AI/ML applications (e.g., facial ID/recognition).

Keywords: adaptive codec, AI, ML, HPC, cyber-physical, cybersecurity

Procedia PDF Downloads 81
2107 The Management Information System for Convenience Stores: Case Study in 7 Eleven Shop in Bangkok

Authors: Supattra Kanchanopast

Abstract:

The purpose of this research is to develop and design a management information system for 7-Eleven shops in Bangkok. The system was designed and developed to meet users' requirements over the internet, using application software such as MySQL for database management, Apache HTTP Server as the web server, and PHP (Hypertext Preprocessor) as the interface between the web server, the database and users. The system was designed as two subsystems: the main system for the head office and the branch system for branch shops. These consist of three parts, classified by user, namely shop management, inventory management and Point of Sale (POS) management. The implementation of the MIS for the mini-mart shop can lessen the amount of paperwork and reduce repetitive tasks, so it may decrease the capital of the business and support an extension of branches in the future as well.

Keywords: convenience store, the management information system, inventory management, 7 eleven shop

Procedia PDF Downloads 487
2106 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15,135 total images; 8,491 of these are images of abnormal cells, and 5,398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger datasets. The proposed diagnostic system has the function of detecting and classifying leukemia. Different from other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be treated as auxiliary features and lead to further improvement of the classification accuracy. In this approach, features extracted from the lower levels are combined into higher-dimension feature maps to help improve the discriminative capability of intermediate features and also to overcome the problem of network gradients vanishing or exploding. By comparing VGG19, ResNet50 and the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model's performance and their pros and cons will be presented at the conference.
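
A minimal sketch of a VGG19 + ResNet50 transfer-learning feature-fusion classifier, in the spirit of the hybrid architecture described above, is given below. The input size, layer choices, classification head and frozen-backbone setup are assumptions; the paper's exact fusion layers are not given in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, ResNet50

inp = layers.Input(shape=(224, 224, 3))

# Two independent pretrained backbones sharing one input tensor.
vgg = VGG19(include_top=False, weights="imagenet", input_tensor=inp)
res = ResNet50(include_top=False, weights="imagenet", input_tensor=inp)
for layer in vgg.layers + res.layers:
    layer.trainable = False                        # transfer learning: freeze

f1 = layers.GlobalAveragePooling2D()(vgg.output)   # VGG19 features
f2 = layers.GlobalAveragePooling2D()(res.output)   # ResNet50 features
fused = layers.Concatenate()([f1, f2])             # feature fusion

x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
out = layers.Dense(1, activation="sigmoid")(x)     # normal vs. abnormal cell

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```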

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 188
2105 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need for quantifying and reducing uncertainties in environmental data, including observations of carbon, water and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (i.e., OzFlux, AmeriFlux, ChinaFLUX, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux, avoiding the limitations of using a single algorithm and therefore providing lower error and reduced uncertainty in the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91 and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement was in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. In addition, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov and Dec) than during the cold season (Apr, May, Jun, Jul, Aug and Sep) due to the higher amount of plant photosynthesis, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement using a single algorithm.
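
The two-layer ensemble (several FFNNs feeding an XGBoost meta-learner) can be sketched as below with synthetic data. The predictor set, the five network structures and the hyperparameters are assumptions, and for brevity the meta-learner is trained on in-sample first-layer predictions rather than the out-of-fold predictions a production stacking setup would use.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Synthetic stand-in for meteorological drivers and CO2 flux.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Layer 1: several FFNNs with different structures.
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=3000, random_state=i)
         for i, h in enumerate([(8,), (16,), (32,), (16, 8), (32, 16)])]
layer1_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in ffnns])
layer1_te = np.column_stack([m.predict(X_te) for m in ffnns])

# Layer 2: XGBoost takes the FFNN outputs as its input.
xgb = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
xgb.fit(layer1_tr, y_tr)
pred = xgb.predict(layer1_te)
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print("Ensemble RMSE on held-out data:", round(rmse, 3))
```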

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 142
2104 ANN Modeling for Cadmium Biosorption from Potable Water Using a Packed-Bed Column Process

Authors: Dariush Jafari, Seyed Ali Jafari

Abstract:

The recommended limit for cadmium concentration in potable water is less than 0.005 mg/L. A continuous biosorption process using the indigenous red seaweed Gracilaria corticata was performed to remove cadmium from potable water. The process was conducted under fixed conditions, and breakthrough curves were obtained for three consecutive sorption-desorption cycles. A model based on an Artificial Neural Network (ANN) was employed to fit the experimental breakthrough data. In addition, a simplified semi-empirical model, the Thomas model, was employed for the same purpose. It was found that the ANN described the experimental data well (R² > 0.99), while the Thomas predictions were slightly less successful, with R² > 0.97. The design parameters adjusted using the nonlinear form of the Thomas model were in good agreement with the experimentally obtained ones. The results confirm the capability of the ANN to predict the cadmium concentration in potable water.
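
A minimal sketch of fitting the nonlinear Thomas model to a breakthrough curve is shown below. The Thomas equation itself is standard; the operating values and the synthetic data are illustrative, not the paper's experimental conditions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Thomas model for a fixed-bed breakthrough curve:
#   C/C0 = 1 / (1 + exp(k_Th*q0*m/Q - k_Th*C0*t))
# k_Th: rate constant, q0: sorption capacity, m: sorbent mass,
# Q: flow rate, C0: inlet concentration. Assumed operating values:
C0, Q, m = 10.0, 2.0, 5.0        # mg/L, mL/min, g

def thomas(t, k_th, q0):
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

# Hypothetical breakthrough data (time in min, C/C0 measured at the outlet).
t = np.linspace(0, 100, 25)
ct_c0 = thomas(t, 0.01, 100) + np.random.default_rng(0).normal(0, 0.02, t.size)

(k_th_fit, q0_fit), _ = curve_fit(thomas, t, ct_c0, p0=[0.005, 50])
print(f"k_Th = {k_th_fit:.4f} L/(mg*min), q0 = {q0_fit:.1f} mg/g")
```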

Keywords: ANN, biosorption, cadmium, packed-bed, potable water

Procedia PDF Downloads 435
2103 Aerobic Bioprocess Control Using Artificial Intelligence Techniques

Authors: M. Caramihai, Irina Severin

Abstract:

This paper deals with the design of an intelligent control structure for a bioprocess of Hansenula polymorpha yeast cultivation. The objective of the process control is to produce biomass in a desired physiological state. The work demonstrates that the designed Hybrid Control Techniques (HCT) are able to recognize specific bioprocess evolution trajectories using neural networks trained specifically for this purpose, in order to estimate the model parameters and to adjust the overall bioprocess evolution through an expert system and a fuzzy structure. The design of the control algorithm as well as its tuning through realistic simulations is presented. Taking into consideration the synergism of different paradigms such as fuzzy logic, neural networks, and symbolic artificial intelligence (AI), this paper presents a complete intelligent control architecture with application to bioprocess control.

Keywords: bioprocess, intelligent control, neural nets, fuzzy structure, hybrid techniques

Procedia PDF Downloads 425
2102 Mathematical Modeling and Algorithms for the Capacitated Facility Location and Allocation Problem with Emission Restriction

Authors: Sagar Hedaoo, Fazle Baki, Ahmed Azab

Abstract:

In supply chain management, network design for scalable manufacturing facilities is an emerging field of research. Facility location-allocation assigns facilities to customers to optimize the overall cost of the supply chain. To further optimize costs, the capacities of these facilities can be changed in accordance with customer demands. A mathematical model is formulated to fully express the problem at hand and to solve small-to-mid-range instances. A dedicated constraint has been developed to restrict emissions in line with the Kyoto Protocol. The problem is NP-hard; hence, a simulated annealing metaheuristic has been developed to solve larger instances. A case study on the USA-Canada border crossing is used.
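
The sketch below shows a simulated annealing loop for a facility location problem with an emission cap treated as a penalty. The data, cooling schedule and the simplification of ignoring capacities are assumptions; it only illustrates the metaheuristic, not the paper's MILP formulation.

```python
import math
import random

random.seed(0)
n_fac, n_cust = 8, 30
fixed = [random.uniform(50, 120) for _ in range(n_fac)]        # opening costs
ship = [[random.uniform(1, 20) for _ in range(n_fac)] for _ in range(n_cust)]
emis = [[0.1 * c for c in row] for row in ship]                # emission per assignment
EMISSION_CAP = 40.0                                            # Kyoto-style limit

def cost(open_set):
    if not open_set:
        return float("inf")
    total = sum(fixed[j] for j in open_set)
    total_emis = 0.0
    for i in range(n_cust):
        j = min(open_set, key=lambda f: ship[i][f])            # nearest open facility
        total += ship[i][j]
        total_emis += emis[i][j]
    return total + 1000.0 * max(0.0, total_emis - EMISSION_CAP)  # cap as penalty

current = {0, 1}
best, best_cost, cur_cost, T = set(current), cost(current), cost(current), 100.0
while T > 0.1:
    cand = set(current)
    cand.symmetric_difference_update({random.randrange(n_fac)})  # open/close one facility
    c = cost(cand)
    if c < cur_cost or random.random() < math.exp((cur_cost - c) / T):
        current, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = set(cand), c
    T *= 0.995                                                   # geometric cooling
print("open facilities:", sorted(best), "cost:", round(best_cost, 1))
```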

Keywords: emission, mixed integer linear programming, metaheuristic, simulated annealing

Procedia PDF Downloads 312
2101 Pollutant Dispersion in Coastal Waters

Authors: Sonia Ben Hamza, Sabra Habli, Nejla Mahjoub Saïd, Hervé Bournot, Georges Le Palec

Abstract:

This paper sheds light on the effect of point-source pollution on streams, stemming from releases that may be intentional or accidental. The consequences of such contamination for ecosystems are very serious. Accordingly, effective tools are in high demand in this respect, tools which enable us to follow the progress of a pollutant accurately and to anticipate the different measures to be applied in order to limit the degradation of the surrounding environment. In this context, we model the dispersion of a pollutant in a free-surface flow ejected by the outfall sewer of an urban sewerage network into coastal water, taking into account the influence of climatic parameters on the spread of the pollutant. Numerical results showed that pollutant dispersion is mainly due to the presence of vortices and turbulence. Hence, it was found that the spread of the pollutant in seawater is strongly correlated with the climatic conditions in this region.

Keywords: coastal waters, numerical simulation, pollutant dispersion, turbulent flows

Procedia PDF Downloads 515
2100 Black-Box-Based Generic Perturbation Generation Method under Salient Graphs

Authors: Dingyang Hu, Dan Liu

Abstract:

DNN (Deep Neural Network) deep learning models are widely used in classification, prediction and other task scenarios. To address the difficulty of generic adversarial perturbation generation for deep learning models under black-box conditions, a generic adversarial perturbation generation method based on a saliency map (CJsp) is proposed: salient image regions are obtained by accounting for the influence of an image's input features on the output results. The method can be understood as a saliency-map attack algorithm that obtains false classification results by reducing the weights of salient feature points. Experiments also demonstrate that the method achieves a high success rate for transfer attacks and is a batch adversarial sample generation method.

Keywords: adversarial sample, gradient, probability, black box

Procedia PDF Downloads 107
2099 Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features

Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi

Abstract:

Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning-based approach that trains a model using convolutional neural networks (CNN) to classify causal relations. We experiment with several different CNN models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks.
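
A minimal 1D-CNN sentence classifier in the spirit of CNN-based relation extraction is sketched below. The vocabulary size, embedding dimension, filter widths and the three-way label set (no relation, cause-to-effect, effect-to-cause) are assumptions; the authors' exact architecture and knowledge-based features are not given in the abstract.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, MAX_LEN, EMB = 20000, 50, 100

inp = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB, EMB)(inp)
branches = []
for k in (3, 4, 5):                                   # n-gram style filters
    b = layers.Conv1D(64, k, activation="relu")(x)
    branches.append(layers.GlobalMaxPooling1D()(b))
x = layers.Concatenate()(branches)
x = layers.Dropout(0.5)(x)
out = layers.Dense(3, activation="softmax")(x)        # relation + direction

model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy batch of already-tokenised sentences (random ids, for shape checking).
X = np.random.randint(1, VOCAB, size=(32, MAX_LEN))
y = np.random.randint(0, 3, size=(32,))
model.fit(X, y, epochs=1, verbose=0)
```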

Keywords: causal relation extraction, relation extraction, convolutional neural network, text representation

Procedia PDF Downloads 741
2098 Intelligent Rescheduling Trains for Air Pollution Management

Authors: Kainat Affrin, P. Reshma, G. Narendra Kumar

Abstract:

Timetable optimization is the need of the day for the rescheduling and routing of trains in real time. Trains are scheduled in parallel with road transport vehicles to the same destination. Because the number of trains is restricted by the single track, customers usually opt to use road transport frequently. Air pollution increases as the density of vehicles in road transport increases. Use of an alternative mode of transport such as the train helps reduce air pollution. This paper mainly aims at attracting passengers to rail transport by proper rescheduling of trains using a hybrid of the stop-skip algorithm and an iterative convex programming algorithm. Trains are rescheduled bi-directionally on a single track with dynamic dwell times and varying stops. The introduction of more trains attracts customers to use rail transport frequently, thereby decreasing pollution. The results are simulated using Network Simulator (NS-2).

Keywords: air pollution, AODV, re-scheduling, WSNs

Procedia PDF Downloads 363
2097 Conservation Challenges of Wetlands Biodiversity in Northeast Region of Bangladesh

Authors: Anisuzzaman Khan, A. J. K. Masud

Abstract:

Bangladesh is the largest delta in the world, predominantly comprising a large network of rivers and wetlands. Wetlands in Bangladesh are represented by inland freshwater, estuarine brackish-water and tidal salt-water coastal wetlands. Bangladesh possesses an enormous area of wetlands, including rivers and streams, freshwater lakes and marshes, haors, baors, beels, water storage reservoirs, fish ponds, flooded cultivated fields and estuarine systems with extensive mangrove swamps. The past, present and future of Bangladesh, and its people's livelihoods, are intimately connected to its relationship with water and wetlands. More than 90% of the country's total area consists of alluvial plains, crisscrossed by a complex network of rivers and their tributaries. Floodplains, beels (low-lying depressions in the floodplain), haors (deep depressions) and baors (oxbow lakes) represent the inland freshwater wetlands. Over a third of Bangladesh could be termed wetlands, considering rivers, estuaries, mangroves, floodplains, beels, baors and haors. The country's wetland ecosystems also offer critical habitats for globally significant biological diversity. Of these, the deeply flooded basins of northeast Bangladesh, known as haors, are a habitat for a wide range of wild flora and fauna unique to Bangladesh. The haor basin, which lies within the districts of Sylhet, Sunamgonj, Netrokona, Kishoregonj, Habigonj, Moulvibazar and Brahmanbaria in the northeast region of Bangladesh, comprises the floodplains of the Meghna tributaries and is characterized by the presence of numerous large, deeply flooded depressions known as haors. It covers around 8,568 km² of Bangladesh. The topography of the region is steep around the foothills in the north, with slopes becoming gradually milder downstream towards the south. The haor is a great reservoir of aquatic biological resources and acts as an ecological safety net for nature as well as for the dwellers of the haor. In reality, however, these areas are considered wastelands, and to make them productive a one-sided plan has long been implemented. The programme is popularly known as Flood Control, Drainage and Irrigation (FCDI) and is mainly devoted to increasing monoculture rice production. However, the haor ecosystem is a multiple-resource base which demands an integrated, sustainable development approach. The ongoing management approach is biased towards rice production alone through FCDI. Thus this primitive mode of action is diminishing other resources with more economic potential than ever thought.

Keywords: freshwater wetlands, biological diversity, biological resources, conservation and sustainable development

Procedia PDF Downloads 333
2096 Image Compression Using Block Power Method for SVD Decomposition

Authors: El Asnaoui Khalid, Chawki Youness, Aksasse Brahim, Ouanan Mohammed

Abstract:

In recent decades, the rapid growth in the development of and demand for multimedia products has contributed to an insufficiency of device bandwidth and network storage memory. Consequently, the theory of data compression becomes more significant for reducing data redundancy in order to save on data transfer and storage. In this context, this paper addresses the problem of lossless and near-lossless compression of images. The proposed method is based on the Block SVD Power Method, which overcomes the disadvantages of Matlab's SVD function. The experimental results show that the proposed algorithm has better compression performance compared with existing compression algorithms that use Matlab's SVD function. In addition, the proposed approach is simple and can provide different degrees of error resilience, which gives, in a short execution time, better image compression.
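
As a rough illustration, the sketch below computes a rank-k image approximation with a simple block power (orthogonal) iteration; it is only a stand-in for the authors' Block SVD Power Method, whose exact algorithm and stopping rule are not given in the abstract.

```python
import numpy as np

def block_power_svd(A, k, iters=100, seed=0):
    """Approximate top-k SVD of A via block power (orthogonal) iteration."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    V = np.linalg.qr(rng.normal(size=(n, k)))[0]   # random orthonormal start
    for _ in range(iters):
        U = np.linalg.qr(A @ V)[0]                 # left block
        V = np.linalg.qr(A.T @ U)[0]               # right block
    B = U.T @ A @ V                                # small k x k core matrix
    Ub, s, Vbt = np.linalg.svd(B)                  # rotate core to diagonal form
    return U @ Ub, s, (V @ Vbt.T).T                # U_k, singular values, V_k^T

# Rank-k reconstruction of a synthetic "image" (real use: a grayscale array).
img = np.random.default_rng(1).random((256, 256))
k = 20
U, s, Vt = block_power_svd(img, k)
compressed = (U * s) @ Vt
ratio = (256 * 256) / (k * (256 + 256 + 1))        # stored numbers vs. original
print("storage ratio approx.", round(ratio, 1),
      "reconstruction error:", round(np.linalg.norm(img - compressed), 3))
```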

Keywords: image compression, SVD, block SVD power method, lossless compression, near lossless

Procedia PDF Downloads 389
2095 Emerging Virtual Linguistic Landscape Created by Members of Language Community in TikTok

Authors: Kai Zhu, Shanhua He, Yujiao Chang

Abstract:

This paper explores the virtual linguistic landscape of an emerging virtual language community on TikTok, a language community realizing immediate and non-immediate communication without a precise spatio-temporal domain, a specific socio-cultural boundary or an interpersonal network. This kind of language community generates a large number and various forms of virtual linguistic landscapes, on which we conducted a virtual ethnographic survey together with telephone interviews to collect data. We followed two language communities on TikTok for several months, so we first illustrate the composition of the two language communities and some typical virtual linguistic landscapes in both of them. We then explore the reasons why and how they are formed through the organization, transcription and analysis of the interviews. Our analysis reveals the richness and diversity of the virtual linguistic landscape, and finally we summarize some of the characteristics of this type of language community.

Keywords: virtual linguistic landscape, virtual language community, virtual ethnographic survey, TikTok

Procedia PDF Downloads 108
2094 The Effect of Feature Selection on Pattern Classification

Authors: Chih-Fong Tsai, Ya-Han Hu

Abstract:

The aim of feature selection (or dimensionality reduction) is to filter out unrepresentative features (or variables) making the classifier perform better than the one without feature selection. Since there are many well-known feature selection algorithms, and different classifiers based on different selection results may perform differently, very few studies consider examining the effect of performing different feature selection algorithms on the classification performances by different classifiers over different types of datasets. In this paper, two widely used algorithms, which are the genetic algorithm (GA) and information gain (IG), are used to perform feature selection. On the other hand, three well-known classifiers are constructed, which are the CART decision tree (DT), multi-layer perceptron (MLP) neural network, and support vector machine (SVM). Based on 14 different types of datasets, the experimental results show that in most cases IG is a better feature selection algorithm than GA. In addition, the combinations of IG with DT and IG with SVM perform best and second best for small and large scale datasets.
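
A minimal sketch of the information-gain-style filter followed by the three classifiers named above is given below. The dataset and the number of retained features are illustrative stand-ins for the paper's 14 datasets, and the GA-based selection is not sketched here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Information-gain-style filter (mutual information) keeping the top features.
X, y = load_breast_cancer(return_X_y=True)
selector = SelectKBest(mutual_info_classif, k=10)

classifiers = {
    "DT": DecisionTreeClassifier(random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVC(kernel="rbf", C=1.0),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(StandardScaler(), selector, clf)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"IG + {name}: mean CV accuracy = {score:.3f}")
```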

Keywords: data mining, feature selection, pattern classification, dimensionality reduction

Procedia PDF Downloads 671
2093 Heart-Rate Resistance Electrocardiogram Identification Based on Slope-Oriented Neural Networks

Authors: Tsu-Wang Shen, Shan-Chun Chang, Chih-Hsien Wang, Te-Chao Fang

Abstract:

For an electrocardiogram (ECG) biometrics system, it is a tedious process to pre-install users' high-intensity heart rate (HR) templates. Based on only resting enrollment templates, it is a challenge to identify humans using ECG with the high-intensity HR caused by exercise and stress. This research provides a heartbeat segmentation method with slope-oriented neural networks that is robust against the ECG morphology changes caused by high-intensity HRs. The method has an overall system accuracy of 97.73% across six levels of HR intensity. A cumulative match characteristic curve is also used for comparison with other traditional ECG biometric methods.

Keywords: high-intensity heart rate, heart rate resistant, ECG human identification, decision based artificial neural network

Procedia PDF Downloads 437
2092 Development of a BriMAIN System for Health Monitoring of Railway Bridges

Authors: Prakher Mishra, Dikshant Bodana, Saloni Desai, Sudhanshu Dixit, Sopan Agarwal, Shriraj Patel

Abstract:

Railways are sometimes the lifeline of nations, as they consist of a huge network of rail lines and bridges. Reportedly, many of the bridges are aging, weak, distressed and accident-prone. It is a challenging task for engineers and workers to keep up a regular maintenance schedule for proper functioning, which itself is quite a demanding job. In this paper we present an innovative wireless maintenance system called BriMAIN. In this system we have installed two types of sensors: the first is a force sensor which continuously analyses the pressure readings at the joints of the bridge, and the second is an MPU-6050 triaxial gyroscope + accelerometer which analyses the deflection of the bridge deck. Apart from this, a separate database is maintained in the server room so that the data can be visualized by the engineers and a warning can be issued in case a sensor reading goes above a threshold.

Keywords: accelerometer, BriMAIN, gyroscope, MPU-6050

Procedia PDF Downloads 385
2091 Efficient Neural and Fuzzy Models for the Identification of Dynamical Systems

Authors: Aouiche Abdelaziz, Soudani Mouhamed Salah, Aouiche El Moundhe

Abstract:

The present paper addresses the use of Artificial Neural Networks (ANNs) and Fuzzy Inference Systems (FISs) for the identification and control of dynamical systems with some degree of uncertainty. Because ANNs and FISs have an inherent ability to approximate functions and to adapt to changes in inputs and parameters, they can be used to control systems too complex for linear controllers. In this work, we show how ANNs and FISs can be arranged to form networks that can learn from external data. Next, input structures that can be used along with ANNs and FISs to model non-linear systems are presented. Four systems were used to test the identification and control of the proposed structures. The results show that the ANNs and FISs used (with the back-propagation algorithm) were efficient in modeling and controlling the non-linear plants.

Keywords: non-linear systems, fuzzy set models, neural network, control law

Procedia PDF Downloads 214
2090 Evaluation of Parameters of Subject Models and Their Mutual Effects

Authors: A. G. Kovalenko, Y. N. Amirgaliyev, A. U. Kalizhanova, L. S. Balgabayeva, A. H. Kozbakova, Z. S. Aitkulov

Abstract:

It is known that statistical information on the operation of a compound multisite system is often far from describing the actual state of the system and does not allow drawing any conclusions about the correctness of its operation. For example, from world practice in the operation of water supply and water disposal systems, it is known that total measurements at consumers and at suppliers differ by 40-60%. This is connected with the mathematical measure of inaccuracy as well as the ineffective running of the corresponding systems. The analysis of widely distributed systems is more difficult; in such systems, subjects that are self-maintained in decision-making carry out economic interactions in production, acts of purchase and sale, resale and consumption. This work analyzed mathematical models of sellers, consumers and arbitragers, and the models of their interaction, in a dispersed single-product market of perfect competition. On the basis of these models, methods allowing the estimation of every subject's operating options, and of the system as a whole, are given.

Keywords: dispersed systems, models, hydraulic network, algorithms

Procedia PDF Downloads 288
2089 Financial Assets Return, Economic Factors and Investor's Behavioral Indicators Relationships Modeling: A Bayesian Networks Approach

Authors: Nada Souissi, Mourad Mroua

Abstract:

The main purpose of this study is to examine the interaction between financial asset volatility, economic factors and investors' behavioral indicators related to both the company's and the market's stocks for the period from January 2000 to January 2020. Using multiple linear regression and Bayesian network modeling, the results show positive and negative relationships between the investor psychology index, economic factors and predicted stock market returns. We reveal that the application of the Bayesian discrete network contributes to identifying the different cause-and-effect relationships between all the economic and financial variables and the psychology index.

Keywords: financial asset return predictability, economic factors, investor's psychology index, Bayesian approach, probabilistic networks, parametric learning

Procedia PDF Downloads 153
2088 Dynamic EEG Desynchronization in Response to Vicarious Pain

Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy

Abstract:

The psychological construct of empathy is to understand a person's cognitive perspective and experience the other person's emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation, through which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and when the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or simulating action, mu rhythms become suppressed or desynchronize; thus, they should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses to empathy and psychopathy scales in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the 'pain matrix'. Physical and social pain activate this network, producing vicarious pain responses in the processing of empathy. Five single EEG electrode locations were applied to regions measuring sensorimotor electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Data for each participant's mu rhythms were analyzed via Fast Fourier Transformation (FFT) and multifractal time series analysis.
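
A minimal sketch of the FFT-based mu-band (8-13 Hz) power comparison is given below, using the study's stated 200 Hz sampling rate but synthetic signals in place of the F3/F4 recordings; the desynchronization index shown is a generic percent-change-from-baseline measure, not necessarily the authors' exact analysis.

```python
import numpy as np

FS = 200           # sampling rate used in the study (Hz)
MU_BAND = (8, 13)  # mu rhythm band (Hz)

def mu_band_power(signal, fs=FS, band=MU_BAND):
    """Mean power in the mu band from an FFT of one EEG segment."""
    sig = signal - signal.mean()                     # remove DC offset
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spec[mask].mean()

# Synthetic baseline vs. pain-observation segments (60 s each) standing in
# for the real recordings exported from the BIOPAC system.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / FS)
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
observe  = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

p_base, p_obs = mu_band_power(baseline), mu_band_power(observe)
erd = 100 * (p_obs - p_base) / p_base                # percent change from baseline
print(f"mu power baseline={p_base:.1f}, observation={p_obs:.1f}, ERD={erd:.1f}%")
```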

Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition

Procedia PDF Downloads 284
2087 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower and equipment, it is not possible to carry out maintenance on all needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking list of sections for maintenance based on several factors. In priority setting, difficult decisions are required for the selection of sections for maintenance. It is more important to repair a section with poor functional condition, which includes an uncomfortable ride, etc., or poor structural condition, i.e. a section in danger of becoming structurally unsound. It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of the section. Maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suitable for use with the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective here is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, it is necessary to put realistic data into the formulation. Each type of repair is quantified over a number of stretches, considering 1000 m as one stretch. The section considered in this study is 3750 m long. The quantities are put into an objective function for maximizing the number of repairs in a stretch in relation to quantity. The distresses observed in this stretch are potholes, surface cracks, rutting and ravelling. The distress data are measured manually by observing each distress level on a stretch of 1000 m. The maintenance and rehabilitation measures currently followed are based on subjective judgments. Hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine the pavement performance and deterioration prediction relationship more accurately, together with the economic benefits to road networks with respect to vehicle operating cost. The road network infrastructure should deliver the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
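
A minimal sketch of such a budget-constrained linear program is shown below, maximizing the number of repairs across the four distress types subject to a budget. The unit costs, measured quantities and budget figure are assumed numbers for illustration, not the study's field data or its exact formulation.

```python
from scipy.optimize import linprog

# Illustrative repair planning for one 3750 m section surveyed in 1000 m stretches.
distresses = ["potholes", "surface_cracks", "rutting", "ravelling"]
unit_cost  = [1500.0, 400.0, 900.0, 600.0]     # assumed cost per unit repaired
quantity   = [12, 80, 30, 45]                  # assumed measured units per section
BUDGET = 60000.0

# Decision variables x_i = units of distress i repaired.
# Maximize total repairs  <=>  minimize -(sum x_i),
# subject to  sum(cost_i * x_i) <= BUDGET  and  0 <= x_i <= quantity_i.
c = [-1.0] * len(distresses)
A_ub = [unit_cost]
b_ub = [BUDGET]
bounds = [(0, q) for q in quantity]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
for name, x in zip(distresses, res.x):
    print(f"{name}: repair {x:.1f} units")
print("total units repaired:", round(-res.fun, 1))
```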

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 209
2086 A Formal Microlectic Framework for Biological Circularchy

Authors: Ellis D. Cooper

Abstract:

“Circularchy” is supposed to be an adjustable formal framework with enough expressive power to articulate biological theory about Earthly Life in the sense of multi-scale biological autonomy constrained by non-equilibrium thermodynamics. “Formal framework” means specifically a multi-sorted first-order theory with equality (for each sort). Philosophically, such a theory is one kind of “microlect,” which means a “way of speaking” (or, more generally, a “way of behaving”) for overtly expressing a “mental model” of some “referent.” Other kinds of microlect include “natural microlect,” “diagrammatic microlect,” and “behavioral microlect,” with examples such as “political theory,” “Euclidean geometry,” and “dance choreography,” respectively. These are all describable in terms of a vocabulary conforming to grammar. As aspects of human culture, they are possibly reminiscent of Ernst Cassirer’s idea of “symbolic form;” as vocabularies, they are akin to Richard Rorty’s idea of “final vocabulary” for expressing a mental model of one’s life. A formal microlect is presented by stipulating sorts, variables, calculations, predicates, and postulates. Calculations (a.k.a., “terms”) may be composed to form more complicated calculations; predicates (a.k.a., “relations”) may be logically combined to form more complicated predicates; and statements (a.k.a., “sentences”) are grammatically correct expressions which are true or false. Conclusions are statements derived using logical rules of deduction from postulates, other assumed statements, or previously derived conclusions. A circularchy is a formal microlect constituted by two or more sub-microlects, each with its distinct stipulations of sorts, variables, calculations, predicates, and postulates. Within a sub-microlect some postulates or conclusions are equations which are statements that declare equality of specified calculations. An equational bond between an equation in one sub-microlect and an equation in either the same sub-microlect or in another sub-microlect is a predicate that declares equality of symbols occurring in a side of one equation with symbols occurring in a side of the other equation. Briefly, a circularchy is a network of equational bonds between sub-microlects. A circularchy is solvable if there exist solutions for all equations that satisfy all equational bonds. If a circularchy is not solvable, then a challenge would be to discover the obstruction to solvability and then conjecture what adjustments might remove the obstruction. Adjustment means changes in stipulated ingredients (sorts, etc.) of sub-microlects, or changes in equational bonds between sub-microlects, or introduction of new sub-microlects and new equational bonds. A circularchy is modular insofar as each sub-microlect is a node in a network of equational bonds. Solvability of a circularchy may be conjectured. Efforts to prove solvability may be thwarted by a counter-example or may lead to the construction of a solution. An automated theorem-proof assistant would likely be necessary for investigating a substantial circularchy, such as one purported to represent Earthly Life. Such investigations (chains of statements) would be concurrent with and no substitute for simulations (chains of numbers).

Keywords: autonomy, first-order theory, mathematics, thermodynamics

Procedia PDF Downloads 222
2085 Cerium Salt Effect in 70s Bioactive Glass

Authors: Alessandra N. Santos, Max P. Ferreira, Alexandra R. P. Silva, Agda A. R. de Oliveira, Marivalda M. Pereira

Abstract:

The literature describes experiments in which ceria nanoparticles in bioactive glass significantly improve the differentiation of stem cells into osteoblasts and increase the production of collagen. It is not known whether this effect, observed due to the presence of nanoceria, can also be observed in the presence of cerium in the bioactive glass network. The effect of cerium in bioactive glasses produced by the sol-gel route is the focus of this work, with the goal of developing a material for tissue engineering with the potential to enhance osteogenesis. A bioactive glass composition based on 70% SiO2–30% CaO was produced with the addition of cerium. XRD, FTIR, SEM/EDS and BET/BJH analyses, an in vitro bioactivity test and a cell viability assay were performed. The results show that cerium remains in the bioactive glass structure. The obtained material presents in vitro bioactivity and promotes cell viability.

Keywords: bioactive glass, bioactivity, cerium salt, material characterization, sol-gel method

Procedia PDF Downloads 236