Search results for: heterogeneous combat network
2203 Research on the Updating Strategy of Public Space in Small Towns in Zhejiang Province under the Background of New-Style Urbanization
Abstract:
Small towns are the most basic administrative units in our country, connecting cities with rural areas. They play an important role in promoting local urban and rural economic development, providing essential public services, and maintaining social stability. With the vigorous development of small towns and the transformation of their industrial structure, changes in social structure, spatial structure, and lifestyle have lagged behind, so that the spatial form and landscape style belong to neither city nor countryside, seriously affecting the quality of residents' living space and environment. The rural economy of Zhejiang Province has taken off, while its society and population are developing in relative stability. In September 2016, Zhejiang Province issued the 'Technical Guidelines for Comprehensive Environmental Remediation of Small Towns in Zhejiang Province' to comprehensively implement small-town environmental remediation, with planning and design leadership, environmental sanitation, urban order, and town appearance as its main content. In November 2016, Huzhou City started the comprehensive environmental improvement of small towns, striving over three years to significantly improve 115 small towns and to create a number of high-quality, distinctive, and beautiful towns that are 'clean and livable, rationally laid out, industrially developed, and picturesque'. This paper takes Meixi Town, Zhangwu Town, and Sanchuan Village in Huzhou City as empirical cases and analyzes small-town public space by applying actor-network theory and space syntax. It also analyzes the spatial composition of actor and social-structure elements and explores the relationship between actors' spatial practice and public open space by drawing on actor-network theory.
This paper then introduces the relevant theories and methods of space syntax and carries out research and planning analysis of small-town spaces from a quantitative perspective. On this basis, it proposes effective updating strategies for the existing problems in public space. Through planning and design at the building level, the dissonant factors produced by various spatial combinations and by mismatches between landscape design and urban texture during small-town development can be resolved, residents' quality of life improved, and the town's development vitality increased.
Keywords: small towns, urbanization, public space, updating
Procedia PDF Downloads 230
2202 Adaptive Data Approximations Codec (ADAC) for AI/ML-based Cyber-Physical Systems
Authors: Yong-Kyu Jung
Abstract:
The fast growth in information technology has led to demands to access and process ever more data. Cyber-physical systems (CPSs) depend heavily on the timing of hardware/software operations and on communication over the network (i.e., real-time and parallel operations in CPSs, e.g., autonomous vehicles). Data processing is an important means of confronting data management issues, reducing the gap between technological growth on the one hand and data complexity and channel bandwidth on the other. An adaptive perpetual data approximation method is introduced to manage the actual entropy of the digital spectrum. An ADAC, implemented as an accelerator and/or as apps for servers and smart connected devices, adaptively rescales digital contents (by 62.8% on average), reducing data processing/access time and energy as well as encryption/decryption overheads in AI/ML applications (e.g., facial ID/recognition).
Keywords: adaptive codec, AI, ML, HPC, cyber-physical, cybersecurity
Procedia PDF Downloads 80
2201 The Management Information System for Convenience Stores: Case Study in 7 Eleven Shop in Bangkok
Authors: Supattra Kanchanopast
Abstract:
The purpose of this research is to design and develop a management information system for a 7-Eleven shop in Bangkok. The system was designed and developed to meet users' requirements over the internet, using application software such as MySQL for database management, Apache HTTP Server as the web server, and PHP as the interface between web server, database, and users. The system was designed as two subsystems: the main system for the head office and the branch system for branch shops. These consisted of three parts, classified by user, for shop management, inventory management, and point-of-sale (POS) management. The implementation of the MIS for the mini-mart shop can lessen the amount of paperwork and reduce repetitive tasks, so it may decrease the capital costs of the business and support an extension of branches in the future as well.
Keywords: convenience store, management information system, inventory management, 7-Eleven shop
Procedia PDF Downloads 486
2200 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge for doctors and hematologists. Worldwide, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia remains time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools has increased and a large volume of high-quality data has been produced, there is an urgent need for more advanced data analysis methods, one of which is the AI approach. This approach has become a major trend in recent years, and several research groups have been working on such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on deep learning with larger datasets remains complex. Leukemia is a major hematological malignancy that causes mortality and morbidity across all ages. We selected acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results of this work can be applied to other types of leukemia. To develop our model, a Kaggle dataset was used, consisting of 15135 images in total, 8491 of abnormal cells and 5398 of normal cells. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches. The proposed system detects and classifies leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50.
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning to extract features from each input image. In our approach, features fused from specific abstraction layers serve as auxiliary features and lead to a further improvement in classification accuracy. Features extracted from lower levels are combined into higher-dimensional feature maps to improve the discriminative capability of intermediate features and to mitigate vanishing or exploding gradients. Comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model has a significant advantage in accuracy. The detailed results of each model's performance, with their pros and cons, will be presented at the conference.
Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
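The feature-fusion idea described above can be illustrated with a minimal numpy sketch. The two backbone functions below are hypothetical stand-ins for VGG19 and ResNet50 (the real networks and their layer choices are not specified here); only the concatenation of low-level and high-level features into one fused vector is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_a(x):
    """Stand-in for VGG19: return a crude 'early layer' and 'late layer' feature vector."""
    low = x.reshape(-1)[:64]           # first 64 raw values as low-level features
    high = np.tanh(x.mean(axis=0))     # pooled, squashed values as high-level features
    return low, high

def backbone_b(x):
    """Stand-in for ResNet50, with different (arbitrary) feature choices."""
    low = x.reshape(-1)[-64:]
    high = np.tanh(x.mean(axis=1))
    return low, high

def fuse_features(x):
    """Concatenate low- and high-level features from both backbones, mirroring
    the idea of using lower-level maps as auxiliary features for the classifier."""
    a_low, a_high = backbone_a(x)
    b_low, b_high = backbone_b(x)
    return np.concatenate([a_low, a_high, b_low, b_high])

image = rng.random((32, 32))           # toy 32x32 "image"
features = fuse_features(image)
print(features.shape)
```

In a real pipeline the fused vector would feed a dense classification head; here the sketch stops at the fused representation.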
Procedia PDF Downloads 187
2199 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets
Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar
Abstract:
Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but the complexity of the problem, with its heterogeneous heat and mass transfer, has prevented thorough understanding. On record, severe incidents have occurred, leading to irreplaceable losses of human life, instruments, and facilities, despite the huge amounts of money invested every year. The coupled effect of an external heat source and an external heat sink is an aspect of combustion yet to be articulated. A better understanding of this coupled phenomenon will enable higher safety standards, more efficient missions, and reduced hazard risks, along with better design, validation, and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, both of which are very important for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets; combined with poor combustion efficiency, it motivates research efforts to evolve superior rockets. This has real engineering, scientific, and practical significance for systems and applications. One potential application is solid rocket motors (SRMs). The study may help in: (i) understanding the effect on the efficiency of core engines due to the primary boosters if considered as a source, (ii) choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, and (iii) indicating how preheating of a successive stage by the previous stage acting as a source may affect the mission. The present work governs the resultant temperature and thus the heat transfer, which is expected to be non-linear because of heterogeneous heat and mass transfer.
The study will deepen the understanding of controlled inter-energy conversions and the coupled effect of external source(s) and sink(s) surrounding the burning fuel, eventually leading to better combustion and thus better propulsion. The work is motivated by the need for enhanced fire safety and better rocket efficiency. The specific objective is to understand the coupled effect of an external heat source and sink on propellant burning and to investigate the role of key controlling parameters. Results so far indicate that there exists a singularity in the coupled effect: the relative dominance of the external heat sink and heat source decides the relative rocket flight in solid rocket motors (SRMs).
Keywords: coupled effect, heat transfer, sink, solid rocket motors, source
Procedia PDF Downloads 223
2198 Impact of the Dog-Technic for D1-D4 and Longitudinal Stroke Technique for Diaphragm on Peak Expiratory Flow (PEF) in Asthmatic Patients
Authors: Victoria Eugenia Garnacho-Garnacho, Elena Sonsoles Rodriguez-Lopez, Raquel Delgado-Delgado, Alvaro Otero-Campos, Jesus Guodemar-Perez, Angelo Michelle Vagali, Juan Pablo Hervas-Perez
Abstract:
Asthma is a heterogeneous disease that has traditionally been managed with drug treatment. The osteopathic treatment we propose aims to determine, through a dorsal manipulation (Dog Technic D1-D4) and a diaphragm technique (Longitudinal Stroke), whether forced expiratory flow changes in spirometry, in particular whether Peak Flow volumes increase post-intervention and post-effort, and whether applying the two techniques together is more effective than applying Longitudinal Stroke alone. We also assessed whether this type of treatment affects dyspnea, a very common symptom in asthma, and finally whether facilitated vertebra pain decreased after manipulation. Methods: Participants were recruited among students and professors of the University, aged 18-65; patients (n = 18) were assigned randomly to one of two groups, group 1 (Longitudinal Stroke plus the Dog Technic dorsal manipulation) and group 2 (the diaphragmatic technique, Longitudinal Stroke, alone). The statistical analysis compares the main indicator of airway obstruction, PEF (peak expiratory flow), across several situations, measured with the Datospir Peak-10 peak flow meter. Measurements were carried out in four phases: at rest, after the stress test, after the treatment, and after treatment plus the stress test. After each stress test, the level of dyspnea of each patient was evaluated with the Borg scale, regardless of group. In group 1, spinous-process pain was additionally measured with an algometer before and after the manipulation. All data were taken at the minute. Results: 12 group 1 (Dog Technic and Longitudinal Stroke) patients responded positively to treatment; PEF increased by 5.1% post-treatment and by 6.1% post-treatment plus effort. The results of the Borg scale, with which we measured the level of dyspnea, were positive: 54.95% of patients noted an improvement in breathing.
In addition, comparison of the group means confirmed that group 1, in which the two techniques were applied, was 34.05% more effective than group 2, in which only one was applied. After the manipulation, pain fell in 38% of cases. Conclusions: The impact of the Dog-Technic for D1-D4 and the Longitudinal Stroke technique for the diaphragm on peak expiratory flow (PEF) volumes in asthmatic patients was positive; PEF changed post-intervention and post-treatment plus effort, and the group in which only one technique was applied proved the less effective. Furthermore, this type of treatment decreased facilitated vertebrae pain and improved dyspnea and the general well-being of the patient.
Keywords: ANS, asthma, manipulation, manual therapy, osteopathic
Procedia PDF Downloads 288
2197 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series
Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold
Abstract:
To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. The global eddy covariance flux tower network (FLUXNET) and its regional counterparts (e.g., OzFlux, AmeriFlux, ChinaFLUX) were established in the late 1990s and early 2000s to address this demand. Despite the value of eddy covariance for validating process modelling, field surveys, and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g., data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill data gaps in CO₂ flux, avoiding the limitations of a single algorithm and therefore providing lower error and reduced uncertainty in the gap-filling process. In this study, data from five towers in the OzFlux network (Alice Springs Mulga, Calperum, Gingin, Howard Springs, and Tumbarumba) during 2013 were used to develop an ensemble machine learning model using five feedforward neural networks (FFNNs) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement was in the estimation of extreme diurnal values (during midday and sunrise) and of nocturnal values, generally considered among the most challenging parts of CO₂ flux gap-filling.
The towers, as well as the seasons, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Moreover, the performance difference between the ensemble model and its individual components was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than during the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to higher plant photosynthesis, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Ensemble machine learning models are therefore potentially capable of improving data estimation and regression outcomes when a single algorithm seems to leave no more room for improvement.
Keywords: carbon flux, eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network
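The two-layer stacking described above (an FFNN first layer feeding an XGB second layer) can be sketched with simple stand-in learners. Closed-form ridge regressors replace both the FFNNs and XGB here, purely to show the data flow of first-layer predictions becoming the meta-model's input; the synthetic data and transforms are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression; a stand-in for one base or meta learner."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

# Synthetic "flux" data: target is a noisy linear function of two drivers.
X = rng.normal(size=(200, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.1 * rng.normal(size=200)

# First layer: five base learners, each seeing a different feature transform
# (mimicking five FFNNs with different structures).
transforms = [lambda x: x, lambda x: x**2, lambda x: np.tanh(x),
              lambda x: np.abs(x), lambda x: np.sin(x)]
first_layer_preds = []
for t in transforms:
    Xt = t(X)
    w = fit_ridge(Xt, y)
    first_layer_preds.append(Xt @ w)

# Second layer ("XGB" stand-in): a meta-model trained on first-layer outputs.
Z = np.column_stack(first_layer_preds)
w_meta = fit_ridge(Z, y)
ensemble_pred = Z @ w_meta

rmse = float(np.sqrt(np.mean((ensemble_pred - y) ** 2)))
print(round(rmse, 3))
```

The design point is that the meta-model can weight base learners per their reliability, which is why stacking can edge out any single component.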
Procedia PDF Downloads 141
2196 ANN Modeling for Cadmium Biosorption from Potable Water Using a Packed-Bed Column Process
Authors: Dariush Jafari, Seyed Ali Jafari
Abstract:
The recommended limit for cadmium concentration in potable water is less than 0.005 mg/L. A continuous biosorption process using the indigenous red seaweed Gracilaria corticata was performed to remove cadmium from potable water. The process was conducted under fixed conditions, and breakthrough curves were obtained for three consecutive sorption-desorption cycles. A model based on an Artificial Neural Network (ANN) was employed to fit the experimental breakthrough data; in addition, a simplified semi-empirical model, the Thomas model, was employed for this purpose. The ANN described the experimental data well (R² > 0.99), while the Thomas predictions were somewhat less accurate, with R² > 0.97. The design parameters fitted using the nonlinear form of the Thomas model were in good agreement with those obtained experimentally. The results confirm the capability of ANNs to predict the cadmium concentration in potable water.
Keywords: ANN, biosorption, cadmium, packed-bed, potable water
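The Thomas model mentioned above has a standard closed form for the breakthrough curve of a fixed-bed column. The sketch below uses illustrative parameter values, not the study's fitted ones:

```python
import numpy as np

def thomas_breakthrough(t, k_th, q0, m, c0, Q):
    """Thomas model for a fixed-bed sorption column:
        C/C0 = 1 / (1 + exp( (k_th/Q) * (q0*m - c0*Q*t) ))
    t    : time (min)
    k_th : Thomas rate constant (L/mg/min)
    q0   : sorption capacity (mg/g)
    m    : sorbent mass (g)
    c0   : inlet concentration (mg/L)
    Q    : flow rate (L/min)
    """
    return 1.0 / (1.0 + np.exp(k_th / Q * (q0 * m - c0 * Q * t)))

# Illustrative parameters: breakthrough occurs around t = q0*m / (c0*Q) = 2000 min
t = np.linspace(0, 4000, 50)
curve = thomas_breakthrough(t, k_th=0.001, q0=20.0, m=5.0, c0=5.0, Q=0.01)
print(float(curve[0]), float(curve[-1]))
```

Fitting the nonlinear form, as the abstract describes, would mean adjusting k_th and q0 so this curve matches the measured effluent concentrations.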
Procedia PDF Downloads 431
2195 Aerobic Bioprocess Control Using Artificial Intelligence Techniques
Authors: M. Caramihai, Irina Severin
Abstract:
This paper deals with the design of an intelligent control structure for a bioprocess of Hansenula polymorpha yeast cultivation. The objective of the process control is to produce biomass in a desired physiological state. The work demonstrates that the designed Hybrid Control Techniques (HCT) are able to recognize specific bioprocess evolution trajectories using neural networks trained specifically for this purpose, in order to estimate the model parameters and to adjust the overall bioprocess evolution through an expert system and a fuzzy structure. The design of the control algorithm, as well as its tuning through realistic simulations, is presented. Taking into consideration the synergy of different paradigms, such as fuzzy logic, neural networks, and symbolic artificial intelligence (AI), we present a complete, working intelligent control architecture with application to bioprocess control.
Keywords: bioprocess, intelligent control, neural nets, fuzzy structure, hybrid techniques
Procedia PDF Downloads 424
2194 Mathematical Modeling and Algorithms for the Capacitated Facility Location and Allocation Problem with Emission Restriction
Authors: Sagar Hedaoo, Fazle Baki, Ahmed Azab
Abstract:
In supply chain management, network design for scalable manufacturing facilities is an emerging field of research. Facility location-allocation assigns facilities to customers to optimize the overall cost of the supply chain. To further optimize costs, the capacities of these facilities can be changed in accordance with customer demands. A mathematical model is formulated to fully express the problem at hand and to solve small-to-mid-size instances. A dedicated constraint has been developed to restrict emissions in line with the Kyoto Protocol. The problem is NP-hard; hence, a simulated annealing metaheuristic has been developed to solve larger instances. A case study on the USA-Canada border crossing is used.
Keywords: emission, mixed integer linear programming, metaheuristic, simulated annealing
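A minimal simulated annealing sketch for a toy capacitated assignment problem may clarify the metaheuristic's structure. The costs and capacities below are hypothetical, and the paper's MILP formulation and emission constraint are not reproduced; capacity violations are handled with a penalty, a common simplification:

```python
import math
import random

random.seed(1)

# Illustrative data: cost[i][j] of serving customer j from facility i
cost = [[4, 8, 3, 6], [6, 2, 7, 5], [5, 5, 4, 9]]
capacity = [2, 2, 2]          # max customers per facility
n_fac, n_cust = 3, 4

def total_cost(assign):
    """Assignment cost plus a penalty for exceeding facility capacities."""
    c = sum(cost[f][j] for j, f in enumerate(assign))
    for i in range(n_fac):
        over = sum(1 for f in assign if f == i) - capacity[i]
        if over > 0:
            c += 100 * over
    return c

def anneal(iters=5000, t0=10.0, cooling=0.999):
    assign = [random.randrange(n_fac) for _ in range(n_cust)]
    cur = total_cost(assign)
    best, best_c = list(assign), cur
    temp = t0
    for _ in range(iters):
        j = random.randrange(n_cust)          # move: reassign one customer
        old = assign[j]
        assign[j] = random.randrange(n_fac)
        new = total_cost(assign)
        # Accept improvements always; accept worsenings with Boltzmann probability
        if new <= cur or random.random() < math.exp((cur - new) / temp):
            cur = new
            if cur < best_c:
                best, best_c = list(assign), cur
        else:
            assign[j] = old                   # reject: undo the move
        temp *= cooling
    return best, best_c

solution, obj = anneal()
print(solution, obj)
```

For this tiny instance the per-customer minima (4 + 2 + 3 + 5 = 14) happen to satisfy the capacities, so 14 is the optimum the annealer should approach.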
Procedia PDF Downloads 310
2193 Pollutant Dispersion in Coastal Waters
Authors: Sonia Ben Hamza, Sabra Habli, Nejla Mahjoub Saïd, Hervé Bournot, Georges Le Palec
Abstract:
This paper sheds light on the effect of point-source pollution on streams, stemming from releases, whether deliberate or accidental. The consequences of such contamination for ecosystems are very serious. Accordingly, effective tools that allow us to track accurately the progress of a pollutant and to anticipate the measures to be applied to limit environmental degradation are in high demand. In this context, we model the dispersion of a pollutant in a free-surface flow ejected by the outfall sewer of an urban sewerage network into coastal water, taking into account the influence of climatic parameters on the spread of the pollutant. Numerical results showed that pollutant dispersion is driven mainly by the presence of vortices and turbulence. Hence, the spread of the pollutant in seawater is strongly correlated with the climatic conditions of the region.
Keywords: coastal waters, numerical simulation, pollutant dispersion, turbulent flows
Procedia PDF Downloads 514
2192 Black-Box-Base Generic Perturbation Generation Method under Salient Graphs
Authors: Dingyang Hu, Dan Liu
Abstract:
DNN (deep neural network) models are widely used in classification, prediction, and other tasks. To address the difficulty of generating generic adversarial perturbations for deep learning models under black-box conditions, a generic adversarial perturbation generation method based on a saliency map (CJsp) is proposed, which obtains salient image regions by counting the factors through which an image's input features influence the output. The method can be understood as a saliency-map attack algorithm that induces false classification results by reducing the weights of salient feature points. Experiments also demonstrate that the method achieves a high success rate for transfer attacks and works as a batch adversarial sample generation method.
Keywords: adversarial sample, gradient, probability, black box
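The idea of attacking only the most salient features can be sketched on a toy linear scorer. Note this stand-in is white-box (the exact gradient is known), whereas the paper works under black-box conditions, so the sketch illustrates only the saliency-selection step, not the black-box estimation:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy linear "model": the class score is w . x (a stand-in for a DNN)
w = rng.normal(size=100)
x = rng.random(100)

def saliency(model_w):
    """For a linear scorer, the gradient of the score w.r.t. each feature
    is just the weight, so saliency is its magnitude."""
    return np.abs(model_w)

def saliency_attack(x, model_w, k=10, eps=0.3):
    """Perturb only the k most salient features, each in the direction
    that pushes the class score down."""
    idx = np.argsort(saliency(model_w))[-k:]
    x_adv = x.copy()
    x_adv[idx] -= eps * np.sign(model_w[idx])
    return x_adv

score = float(w @ x)
score_adv = float(w @ saliency_attack(x, w))
print(score_adv < score)
```

Concentrating the perturbation on salient features is what keeps the attack budget small; for the linear stand-in the score drop is exactly eps times the sum of the attacked weights' magnitudes.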
Procedia PDF Downloads 105
2191 Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features
Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi
Abstract:
Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning approach that trains a model using convolutional neural networks (CNNs) to classify causal relations. We experiment with several different CNN models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations, as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks.
Keywords: causal relation extraction, relation extraction, convolutional neural network, text representation
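The convolution-plus-max-pooling block at the heart of such CNN relation classifiers can be sketched as follows; the embeddings and filters below are random stand-ins (no trained model is implied), and only the shape of the computation is the point:

```python
import numpy as np

rng = np.random.default_rng(3)

def conv1d_maxpool(embeddings, filters):
    """One convolution + global max-pooling block, the core of CNN text
    classifiers: slide each filter over the token sequence and keep its
    strongest response, yielding one feature per filter."""
    n_tokens, dim = embeddings.shape
    n_filt, width, _ = filters.shape
    feats = np.empty(n_filt)
    for f in range(n_filt):
        responses = [
            np.sum(embeddings[i:i + width] * filters[f])
            for i in range(n_tokens - width + 1)
        ]
        feats[f] = max(responses)
    return feats

# A 5-token sentence as random 8-dimensional embeddings
sentence = rng.normal(size=(5, 8))
filters = rng.normal(size=(16, 3, 8))   # 16 trigram filters
features = conv1d_maxpool(sentence, filters)
print(features.shape)
```

In a full classifier the pooled feature vector would feed a softmax layer that predicts the relation label (causal vs. non-causal, and its direction).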
Procedia PDF Downloads 736
2190 Intelligent Rescheduling Trains for Air Pollution Management
Authors: Kainat Affrin, P. Reshma, G. Narendra Kumar
Abstract:
Optimization of the timetable is the need of the day for the rescheduling and routing of trains in real time. Trains are scheduled in parallel with road transport vehicles to the same destination. Because the number of trains is restricted by single-track operation, customers usually opt for road transport for frequent travel. Air pollution increases as the density of vehicles on the roads increases, so using an alternative mode of transport such as rail helps reduce it. This paper aims at attracting passengers to rail transport by properly rescheduling trains using a hybrid of a stop-skip algorithm and an iterative convex programming algorithm. Bi-directional rescheduling of trains is achieved on a single track with dynamic dual times and varying stops. Introducing more trains attracts customers to use rail transport more frequently, thereby decreasing pollution. The results are simulated using Network Simulator (NS-2).
Keywords: air pollution, AODV, re-scheduling, WSNs
Procedia PDF Downloads 361
2189 Conservation Challenges of Wetlands Biodiversity in Northeast Region of Bangladesh
Authors: Anisuzzaman Khan, A. J. K. Masud
Abstract:
Bangladesh is the largest delta in the world, predominantly comprising a large network of rivers and wetlands. Wetlands in Bangladesh are represented by inland freshwater, estuarine brackish-water, and tidal salt-water coastal wetlands. Bangladesh possesses an enormous area of wetlands, including rivers and streams, freshwater lakes and marshes, haors, baors, beels, water storage reservoirs, fish ponds, flooded cultivated fields, and estuarine systems with extensive mangrove swamps. The past, present, and future of Bangladesh and its people's livelihoods are intimately connected to its relationship with water and wetlands. More than 90% of the country's total area consists of alluvial plains, crisscrossed by a complex network of rivers and their tributaries. Floodplains, beels (low-lying depressions in the floodplain), haors (deep depressions), and baors (oxbow lakes) represent the inland freshwater wetlands. Over a third of Bangladesh could be termed wetlands, considering its rivers, estuaries, mangroves, floodplains, beels, baors, and haors. The country's wetland ecosystems also offer critical habitats for globally significant biological diversity. Of these, the deeply flooded basins of northeast Bangladesh, known as haors, are a habitat for a wide range of wild flora and fauna unique to Bangladesh. The haor basin lies within the districts of Sylhet, Sunamgonj, Netrokona, Kishoregonj, Habigonj, Moulvibazar, and Brahmanbaria in the northeast region of Bangladesh; it comprises the floodplains of the Meghna tributaries and is characterized by the presence of numerous large, deeply flooded depressions known as haors, covering about 8,568 km² of Bangladesh. The topography of the region is steep near the foothills in the north, with slopes becoming gradually milder downstream towards the south. The haor is a great reservoir of aquatic biological resources and acts as an ecological safety net for nature as well as for the dwellers of the haor.
In reality, however, these areas are considered wastelands, and to turn them into productive land a one-sided plan has been implemented for a long time. The programme, popularly known as Flood Control, Drainage and Irrigation (FCDI), is mainly devoted to increasing monoculture rice production. However, the haor ecosystem is a multiple-resource base that demands an integrated, sustainable development approach. The ongoing management approach is biased towards rice production alone through FCDI; this primitive mode of action is diminishing other resources with greater economic potential than has ever been recognized.
Keywords: freshwater wetlands, biological diversity, biological resources, conservation and sustainable development
Procedia PDF Downloads 330
2188 Image Compression Using Block Power Method for SVD Decomposition
Authors: El Asnaoui Khalid, Chawki Youness, Aksasse Brahim, Ouanan Mohammed
Abstract:
In recent decades, the important and fast growth in the development of and demand for multimedia products has contributed to a shortfall in device bandwidth and network storage memory. Consequently, the theory of data compression becomes more significant for reducing data redundancy in order to save more on data transfer and storage. In this context, this paper addresses the problem of lossless and near-lossless compression of images. The proposed method is based on the Block SVD Power Method, which overcomes the disadvantages of Matlab's SVD function. The experimental results show that the proposed algorithm has better compression performance than existing compression algorithms that use Matlab's SVD function. In addition, the proposed approach is simple and can provide different degrees of error resilience, yielding better image compression in a short execution time.
Keywords: image compression, SVD, block SVD power method, lossless compression, near lossless
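A block power (subspace) iteration for the top-k SVD, the building block the method's name refers to, can be sketched as follows. This is a generic textbook version, not the paper's specific algorithm; the test matrix is constructed with known singular values so convergence can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_power_svd(A, k, iters=300):
    """Top-k SVD via block power (subspace) iteration: repeatedly apply
    A^T A to a k-column block and re-orthonormalise with QR."""
    n = A.shape[1]
    V, _ = np.linalg.qr(rng.normal(size=(n, k)))     # random orthonormal start
    for _ in range(iters):
        V, _ = np.linalg.qr(A.T @ (A @ V))           # one subspace-iteration step
    sigma = np.linalg.norm(A @ V, axis=0)            # singular values
    U = (A @ V) / sigma                              # left singular vectors
    return U, sigma, V

# Build an 8x6 matrix with known singular values 5, 3, 1
U0, _ = np.linalg.qr(rng.normal(size=(8, 3)))
V0, _ = np.linalg.qr(rng.normal(size=(6, 3)))
A = U0 @ np.diag([5.0, 3.0, 1.0]) @ V0.T

_, sigma, _ = block_power_svd(A, k=2)
print(np.round(sigma, 6))
```

For compression, keeping only the k largest singular triplets of each image block gives the rank-k approximation; the quality/size trade-off is set by k.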
Procedia PDF Downloads 388
2187 Emerging Virtual Linguistic Landscape Created by Members of Language Community in TikTok
Authors: Kai Zhu, Shanhua He, Yujiao Chang
Abstract:
This paper explores the virtual linguistic landscape of an emerging virtual language community on TikTok, a language community realizing immediate and non-immediate communication without a precise spatio-temporal domain, a specific socio-cultural boundary, or an interpersonal network. This kind of language community generates a large number and various forms of virtual linguistic landscapes, for which we conducted a virtual ethnographic survey together with telephone interviews to collect data. We have been following two language communities on TikTok for several months, so we can first illustrate the composition of the two communities and some typical virtual language landscapes in each. We then explore why and how they are formed through the organization, transcription, and analysis of the interviews. Our analysis reveals the richness and diversity of the virtual linguistic landscape, and finally, we summarize some of the characteristics of this language community.
Keywords: virtual linguistic landscape, virtual language community, virtual ethnographic survey, TikTok
Procedia PDF Downloads 105
2186 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process
Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa
Abstract:
Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes, especially in sewage and wastewater treatment. Recently, the rising demand for renewable energy from organic matter has caused the spectrum of AD substrates to expand and include a wider variety of organic materials, such as agricultural residues and farm manure, of which around 140 billion metric tons are generated annually worldwide. The problem, however, is that agricultural wastes are composed of heterogeneous, hard-to-degrade materials, particularly lignin, which makes up about 0-40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches over a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) subjected to different conditions, first to determine the bioadditive concentration most conducive to overall process improvement and yield increase. The batch experiments were set up in 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating under mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation was used to non-linearly regress the resulting data to estimate gas production potential, production rates, and the duration of lag phases as indicators of the degree of lignin inhibition.
The results showed that lignin had a strong inhibitory effect on the AD process: the higher the lignin concentration, the stronger the inhibition. The modelling also showed that gas production rates were influenced by the lignin concentration added to the system: the higher the lignin concentration in mg (0, 200, 300, 400, 500, and 600), the lower the respective gas production rate in ml/gVS.day (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the 300 mg rate was 0.1 ml/gVS.day higher than that of the 200 mg digester. The impact of the yeast-based bioadditive on the production rate was most significant in the 400 mg and 500 mg digesters, where the rate improved by 0.1 ml/gVS.day and 0.2 ml/gVS.day respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study of its application to the AD of high-lignin agricultural residues will be the next step in this research. Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas
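The modified Gompertz fit mentioned in the abstract can be sketched as below; the curve values and starting guesses are illustrative placeholders, not the study's data, and `scipy.optimize.curve_fit` merely stands in for whatever non-linear regression routine the authors used:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rm, lam):
    """Modified Gompertz: cumulative gas yield at time t (days).
    P = gas production potential (ml/gVS), Rm = maximum production
    rate (ml/gVS.day), lam = lag phase duration (days)."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# synthetic cumulative biogas curve over a 56-day retention time
# (hypothetical parameters, e.g. a low-lignin digester)
t = np.linspace(0, 56, 57)
y = gompertz(t, 120.0, 2.2, 3.0)

# recover P, Rm and lag phase by non-linear regression
popt, _ = curve_fit(gompertz, t, y, p0=[100.0, 1.0, 1.0])
P_fit, Rm_fit, lam_fit = popt
```

On real measurements the fitted `Rm` values would be the per-concentration rates the abstract reports (3.3 down to 1.1 ml/gVS.day), and a longer fitted `lam` would indicate stronger lignin inhibition.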
Procedia PDF Downloads 92
2185 The Effect of Feature Selection on Pattern Classification
Authors: Chih-Fong Tsai, Ya-Han Hu
Abstract:
The aim of feature selection (or dimensionality reduction) is to filter out unrepresentative features (or variables) so that the classifier performs better than it would without feature selection. Although there are many well-known feature selection algorithms, and different classifiers based on different selection results may perform differently, very few studies examine the effect of applying different feature selection algorithms on the classification performance of different classifiers over different types of datasets. In this paper, two widely used algorithms, the genetic algorithm (GA) and information gain (IG), are used to perform feature selection. In addition, three well-known classifiers are constructed: the CART decision tree (DT), the multi-layer perceptron (MLP) neural network, and the support vector machine (SVM). Based on 14 different types of datasets, the experimental results show that in most cases IG is a better feature selection algorithm than GA. In addition, the combinations of IG with DT and IG with SVM perform best and second best for small- and large-scale datasets. Keywords: data mining, feature selection, pattern classification, dimensionality reduction
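A minimal sketch of the IG-plus-DT pipeline, assuming scikit-learn: `mutual_info_classif` approximates information gain as a filter score, and the iris dataset is only an illustrative stand-in for the paper's 14 datasets:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# information-gain-style filter: keep the 2 most informative features
X_sel = SelectKBest(mutual_info_classif, k=2).fit_transform(X, y)

# evaluate a CART-style decision tree with and without selection
tree = DecisionTreeClassifier(random_state=0)
acc_full = cross_val_score(tree, X, y, cv=5).mean()
acc_sel = cross_val_score(tree, X_sel, y, cv=5).mean()
```

Comparing `acc_full` and `acc_sel` across many datasets and classifiers is the kind of experiment the abstract describes; GA-based wrapper selection would replace the filter step with an evolutionary search over feature subsets.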
Procedia PDF Downloads 669
2184 Heart-Rate Resistance Electrocardiogram Identification Based on Slope-Oriented Neural Networks
Authors: Tsu-Wang Shen, Shan-Chun Chang, Chih-Hsien Wang, Te-Chao Fang
Abstract:
For an electrocardiogram (ECG) biometric system, pre-installing users' high-intensity heart rate (HR) templates is a tedious process. With only resting enrollment templates, it is a challenge to identify a person from ECG recorded at the high-intensity HRs caused by exercise and stress. This research provides a heartbeat segmentation method with slope-oriented neural networks that is robust to the ECG morphology changes caused by high-intensity HRs. The method achieves an overall system accuracy of 97.73% across six levels of HR intensity. A cumulative match characteristic curve is also used for comparison with other traditional ECG biometric methods. Keywords: high-intensity heart rate, heart rate resistant, ECG human identification, decision based artificial neural network
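The cumulative match characteristic (CMC) curve used for the comparison can be computed as sketched here; the similarity scores are toy numbers, not the study's ECG matching scores:

```python
import numpy as np

def cmc_curve(score_matrix, true_ids):
    """Cumulative match characteristic from a (probes x gallery)
    similarity matrix: cmc[k-1] is the fraction of probes whose true
    identity appears within the top-k ranked gallery entries."""
    ranks = []
    for i, row in enumerate(score_matrix):
        order = np.argsort(row)[::-1]  # best match first
        ranks.append(int(np.where(order == true_ids[i])[0][0]) + 1)
    n_gallery = score_matrix.shape[1]
    return np.array([np.mean([r <= k for r in ranks])
                     for k in range(1, n_gallery + 1)])

# toy similarity scores: 3 probes matched against a 3-subject gallery
scores = np.array([[0.9, 0.2, 0.1],
                   [0.3, 0.4, 0.8],
                   [0.5, 0.6, 0.2]])
cmc = cmc_curve(scores, true_ids=[0, 2, 0])
```

Plotting `cmc` against rank k is the standard way to compare biometric identification methods, since a better identifier pushes the curve toward 1.0 at low ranks.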
Procedia PDF Downloads 436
2183 Development of a BriMAIN System for Health Monitoring of Railway Bridges
Authors: Prakher Mishra, Dikshant Bodana, Saloni Desai, Sudhanshu Dixit, Sopan Agarwal, Shriraj Patel
Abstract:
Railways are often the lifeline of a nation, consisting of a huge network of rail lines and bridges. Reportedly, many of these bridges are aging, weak, distressed, and accident-prone. Keeping up a regular maintenance schedule to ensure proper functioning is a challenging and demanding task for engineers and workers. In this paper we present an innovative wireless maintenance system called BriMAIN. In this system we install two types of sensors: first, force sensors that continuously analyse the pressure readings at the joints of the bridge, and second, an MPU-6050 triaxial gyroscope and accelerometer that analyses the deflection of the bridge deck. In addition, a separate database is maintained in the server room so that the data can be visualized by engineers and a warning issued in case sensor readings exceed their thresholds. Keywords: Accelerometer, B-MAIN, Gyroscope, MPU-6050
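The server-side warning logic amounts to a threshold check on incoming readings. A minimal sketch, where both threshold values and the two-reading interface are assumptions for illustration, not BriMAIN's actual configuration:

```python
# Hypothetical alert thresholds for BriMAIN-style sensor readings
FORCE_LIMIT_N = 5000.0   # assumed joint-force threshold (newtons)
TILT_LIMIT_DEG = 2.0     # assumed deck-deflection threshold (degrees)

def check_bridge(force_n, tilt_deg):
    """Return a list of warning strings for any reading that
    exceeds its threshold; an empty list means all clear."""
    warnings = []
    if force_n > FORCE_LIMIT_N:
        warnings.append("joint force %.0f N exceeds %.0f N"
                        % (force_n, FORCE_LIMIT_N))
    if abs(tilt_deg) > TILT_LIMIT_DEG:
        warnings.append("deck tilt %.2f deg exceeds %.2f deg"
                        % (tilt_deg, TILT_LIMIT_DEG))
    return warnings
```

In a deployment, each stored reading from the force sensors and the MPU-6050 would pass through a check like this before being written to the database, so engineers see warnings alongside the visualized data.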
Procedia PDF Downloads 384
2182 Efficient Neural and Fuzzy Models for the Identification of Dynamical Systems
Authors: Aouiche Abdelaziz, Soudani Mouhamed Salah, Aouiche El Moundhe
Abstract:
The present paper addresses the use of Artificial Neural Networks (ANNs) and Fuzzy Inference Systems (FISs) for the identification and control of dynamical systems with some degree of uncertainty. Because ANNs and FISs have an inherent ability to approximate functions and to adapt to changes in inputs and parameters, they can be used to control systems too complex for linear controllers. In this work, we show how ANNs and FISs can be arranged to form networks that learn from external data. We then present input structures that can be used with ANNs and FISs to model non-linear systems. Four systems were used to test the identification and control of the proposed structures. The results show that the ANNs and FISs, trained with the back-propagation algorithm, were effective in modeling and controlling the non-linear plants. Keywords: non-linear systems, fuzzy set models, neural network, control law
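A common input structure for this kind of identification is the NARX regressor, where each training example holds delayed outputs and inputs of the plant. The paper does not specify its structures, so this is an illustrative sketch with a toy plant:

```python
import numpy as np

def narx_inputs(u, y, nu=2, ny=2):
    """Build a NARX-style regressor matrix: each row holds the past
    outputs y(k-1..k-ny) and past inputs u(k-1..k-nu), the target is
    y(k). This is a typical input structure for ANN/FIS identification
    of dynamical systems."""
    n = max(nu, ny)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([y[k - ny:k][::-1],   # y(k-1), y(k-2)
                                    u[k - nu:k][::-1]])) # u(k-1), u(k-2)
        targets.append(y[k])
    return np.array(rows), np.array(targets)

# toy plant: output is half the input signal
u = np.arange(10, dtype=float)
y = 0.5 * u
X, t = narx_inputs(u, y)
```

The matrix `X` and targets `t` would then be fed to an ANN or FIS trained by back-propagation; the network learns the one-step-ahead map from delayed signals to the next output.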
Procedia PDF Downloads 213
2181 Evaluation of Parameters of Subject Models and Their Mutual Effects
Authors: A. G. Kovalenko, Y. N. Amirgaliyev, A. U. Kalizhanova, L. S. Balgabayeva, A. H. Kozbakova, Z. S. Aitkulov
Abstract:
It is known that statistical information on the operation of a compound multisite system is often far from describing the actual state of the system and does not allow any conclusions to be drawn about the correctness of its operation. For example, from worldwide practice in operating water supply and water disposal systems, it is known that total measurements at consumers and at suppliers differ by 40-60%. This is connected with measurement inaccuracy as well as ineffective operation of the corresponding systems. Analysis is more difficult for widely distributed systems, in which subjects that are autonomous in their decision-making interact economically through production, purchase and sale, resale, and consumption. This work analyzes mathematical models of sellers, consumers, and arbitragers, and models of their interaction, in a dispersed single-product market of perfect competition. On the basis of these models, methods are given that allow estimation of each subject's operating options and of the system as a whole. Keywords: dispersed systems, models, hydraulic network, algorithms
Procedia PDF Downloads 284
2180 Financial Assets Return, Economic Factors and Investor's Behavioral Indicators Relationships Modeling: A Bayesian Networks Approach
Authors: Nada Souissi, Mourad Mroua
Abstract:
The main purpose of this study is to examine the interaction between financial asset volatility, economic factors, and investor behavioral indicators related to both company and market stocks for the period from January 2000 to January 2020. Using multiple linear regression and Bayesian network modeling, the results show both positive and negative relationships between the investor psychology index, economic factors, and predicted stock market return. We show that applying a discrete Bayesian network helps identify the different cause-and-effect relationships between all the economic and financial variables and the psychology index. Keywords: financial asset return predictability, economic factors, investor's psychology index, Bayesian approach, probabilistic networks, parametric learning
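The multiple-linear-regression half of the methodology can be sketched with an ordinary least-squares fit; the coefficients, noise level, and variable names here are simulated placeholders, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical monthly observations, Jan 2000 - Jan 2020
n = 240
psych = rng.normal(size=n)       # investor psychology index (assumed)
econ = rng.normal(size=n)        # one economic factor (assumed)
X = np.column_stack([np.ones(n), psych, econ])

# simulated stock return with assumed true coefficients
true_beta = np.array([0.01, 0.5, -0.3])
ret = X @ true_beta + rng.normal(scale=0.01, size=n)

# OLS estimate of intercept and the two slopes
beta, *_ = np.linalg.lstsq(X, ret, rcond=None)
```

The sign of each slope in `beta` is what the abstract summarizes as "positive and negative relationships"; the Bayesian-network step would go further by discretizing the variables and learning a directed structure over them.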
Procedia PDF Downloads 151
2179 Dynamic EEG Desynchronization in Response to Vicarious Pain
Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy
Abstract:
The psychological construct of empathy involves understanding another person's cognitive perspective and experiencing that person's emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. This study addresses empathy as a nonlinear dynamic process of simulation through which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony, and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or by simulating action, mu rhythms become suppressed or desynchronized; they should therefore be suppressed while observing video clips of painful injuries if previous research on mirror-system activation holds. Twelve undergraduates contributed EEG data and survey responses on empathy and psychopathy scales, in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the 'pain matrix'.
Both physical and social pain activate this network, producing the vicarious pain responses involved in processing empathy. Five single-electrode EEG locations were applied over regions measuring sensorimotor electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Each participant's mu rhythm data were analyzed via Fast Fourier Transformation (FFT) and multifractal time series analysis. Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition
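The FFT-based mu-band measure can be sketched as a relative-power computation at the study's 200 Hz sampling rate; the synthetic 10 Hz signal below is only a stand-in for a real F3/F4 recording:

```python
import numpy as np

FS = 200           # sampling rate (Hz), as in the study
MU_BAND = (8, 13)  # mu rhythm band (Hz)

def mu_band_power(signal, fs=FS):
    """Relative power in the 8-13 Hz mu band via an FFT periodogram:
    the fraction of total spectral power falling inside the band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
    return power[in_band].sum() / power.sum()

# synthetic 2 s "baseline" epoch: a pure 10 Hz mu oscillation,
# so nearly all power falls inside the band
t = np.arange(0, 2, 1.0 / FS)
baseline = np.sin(2 * np.pi * 10 * t)
```

Desynchronization would appear as a drop in `mu_band_power` for epochs recorded during the injury videos relative to the baseline epochs.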
Procedia PDF Downloads 284
2178 Budgetary Performance Model for Managing Pavement Maintenance
Authors: Vivek Hokam, Vishrut Landge
Abstract:
An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to constraints such as budget, manpower, and equipment, it is not possible to carry out maintenance on all industrial road sections in need within a given planning period. A rational and systematic priority scheme must therefore be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions must be made between repairing a section in poor functional condition, with problems such as an uncomfortable ride, and repairing one in poor structural condition, i.e., a section in danger of becoming structurally unsound. It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of each section. Existing maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimal decision is one that meets a specified management objective subject to various constraints and restrictions. The objective here is mainly the minimization of the cost of maintaining roads in an industrial area. To determine the objective function for the analysis of the distress model, realistic data must be fitted into a formulation.
Each type of repair is quantified over a number of stretches, with 1000 m taken as one stretch. The section under study is 3750 m long. These quantities enter an objective function that maximizes the number of repairs per stretch. The distresses observed in this section are potholes, surface cracks, rutting, and ravelling. The distress data are measured manually by observing each distress level on each 1000 m stretch. The maintenance and rehabilitation measures currently followed are based on subjective judgment. Hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine pavement performance and deterioration prediction relationships more accurately, together with the economic benefits to road networks in terms of vehicle operating cost. The road network infrastructure should deliver the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model that considers overloading is discussed. Keywords: budget, maintenance, deterioration, priority
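The linear program described above, maximizing repairs for the four distress types under a budget, can be sketched as follows; the unit costs, distress quantities, and budget are invented for illustration, not the paper's measured data:

```python
from scipy.optimize import linprog

# hypothetical repair cost per distress unit (currency units):
# pothole, surface crack, rutting, ravelling
costs = [50.0, 20.0, 35.0, 25.0]
limits = [12, 30, 18, 22]   # assumed distress units observed per type
budget = 1500.0             # assumed maintenance budget

# maximize total repaired units; linprog minimizes, so negate
res = linprog(
    c=[-1.0, -1.0, -1.0, -1.0],
    A_ub=[costs],            # total spend must stay within budget
    b_ub=[budget],
    bounds=[(0, u) for u in limits],
)
total_repairs = -res.fun
```

With these numbers the optimizer exhausts the cheap crack and ravelling repairs first and spends the remainder on rutting, exactly the kind of budget-constrained prioritization the abstract argues for.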
Procedia PDF Downloads 208
2177 A Formal Microlectic Framework for Biological Circularchy
Authors: Ellis D. Cooper
Abstract:
“Circularchy” is intended as an adjustable formal framework with enough expressive power to articulate biological theory about Earthly Life, in the sense of multi-scale biological autonomy constrained by non-equilibrium thermodynamics. “Formal framework” means, specifically, a multi-sorted first-order theory with equality (for each sort). Philosophically, such a theory is one kind of “microlect,” which means a “way of speaking” (or, more generally, a “way of behaving”) for overtly expressing a “mental model” of some “referent.” Other kinds of microlect include “natural microlect,” “diagrammatic microlect,” and “behavioral microlect,” with examples such as “political theory,” “Euclidean geometry,” and “dance choreography,” respectively. These are all describable in terms of a vocabulary conforming to a grammar. As aspects of human culture, they are possibly reminiscent of Ernst Cassirer’s idea of “symbolic form”; as vocabularies, they are akin to Richard Rorty’s idea of a “final vocabulary” for expressing a mental model of one’s life. A formal microlect is presented by stipulating sorts, variables, calculations, predicates, and postulates. Calculations (a.k.a. “terms”) may be composed to form more complicated calculations; predicates (a.k.a. “relations”) may be logically combined to form more complicated predicates; and statements (a.k.a. “sentences”) are grammatically correct expressions which are true or false. Conclusions are statements derived by logical rules of deduction from postulates, other assumed statements, or previously derived conclusions. A circularchy is a formal microlect constituted by two or more sub-microlects, each with its own distinct stipulation of sorts, variables, calculations, predicates, and postulates. Within a sub-microlect, some postulates or conclusions are equations, which are statements declaring the equality of specified calculations.
An equational bond between an equation in one sub-microlect and an equation in either the same or another sub-microlect is a predicate declaring the equality of symbols occurring in a side of one equation with symbols occurring in a side of the other. Briefly, a circularchy is a network of equational bonds between sub-microlects. A circularchy is solvable if there exist solutions for all equations that satisfy all equational bonds. If a circularchy is not solvable, then a challenge would be to discover the obstruction to solvability and to conjecture what adjustments might remove the obstruction. Adjustment means changes in the stipulated ingredients (sorts, etc.) of sub-microlects, changes in the equational bonds between sub-microlects, or the introduction of new sub-microlects and new equational bonds. A circularchy is modular insofar as each sub-microlect is a node in a network of equational bonds. Solvability of a circularchy may be conjectured. Efforts to prove solvability may be thwarted by a counter-example or may lead to the construction of a solution. An automated theorem-proving assistant would likely be necessary for investigating a substantial circularchy, such as one purported to represent Earthly Life. Such investigations (chains of statements) would be concurrent with, and no substitute for, simulations (chains of numbers). Keywords: autonomy, first-order theory, mathematics, thermodynamics
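In the simplest case, where every equation is linear and numerical, checking the solvability of a small circularchy reduces to checking the consistency of a bonded system of equations. The toy system below is an illustrative assumption, far simpler than the biological circularchies the abstract envisions:

```python
import numpy as np

# Toy circularchy over variables [x, y, u, v]:
#   sub-microlect A contributes the equation  x = 2y
#   sub-microlect B contributes the equation  u = v + 1
#   an equational bond identifies x (in A) with u (in B)
A = np.array([
    [1.0, -2.0,  0.0,  0.0],  # A:    x - 2y     = 0
    [0.0,  0.0,  1.0, -1.0],  # B:        u - v  = 1
    [1.0,  0.0, -1.0,  0.0],  # bond: x     - u  = 0
])
b = np.array([0.0, 1.0, 0.0])

# a least-squares solution with zero residual witnesses solvability
sol, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
solvable = np.allclose(A @ sol, b)
```

An inconsistent bond (an obstruction to solvability) would leave a nonzero residual, and the "adjustment" the abstract describes would correspond to changing a row of the system until consistency is restored.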
Procedia PDF Downloads 221
2176 Cerium Salt Effect in 70s Bioactive Glass
Authors: Alessandra N. Santos, Max P. Ferreira, Alexandra R. P. Silva, Agda A. R. de Oliveira, Marivalda M. Pereira
Abstract:
The literature describes experiments in which ceria nanoparticles in bioactive glass significantly improve the differentiation of stem cells into osteoblasts and increase collagen production. It is not known whether this effect, observed in the presence of nanoceria, can also be observed when cerium is present in the bioactive glass network. The effect of cerium in bioactive glasses produced via the sol-gel route is the focus of this work, with the goal of developing a material for tissue engineering with the potential to enhance osteogenesis. A bioactive glass composition based on 70% SiO2–30% CaO is produced with the addition of cerium. XRD, FTIR, SEM/EDS, and BET/BJH analyses, an in vitro bioactivity test, and a cell viability assay were performed. The results show that cerium remains in the bioactive glass structure. The obtained material presents in vitro bioactivity and promotes cell viability. Keywords: bioactive glass, bioactivity, cerium salt, material characterization, sol-gel method
Procedia PDF Downloads 234
2175 Empirical and Indian Automotive Equity Portfolio Decision Support
Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu
Abstract:
A brief review of empirical studies on stock market decision support methodology indicates that they are at the threshold of validating the accuracy of traditional, fuzzy, artificial neural network, and decision tree models. Many researchers have attempted to compare these models using various data sets worldwide. However, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectoral support for stock market decisions. The study identifies the significant variables, and their lags, that affect stock prices, using OLS analysis and decision tree classifiers. Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis
Procedia PDF Downloads 486
2174 ANFIS Approach for Locating Faults in Underground Cables
Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat
Abstract:
This paper presents a fault identification, classification, and fault location estimation method based on the Discrete Wavelet Transform and an Adaptive Network Fuzzy Inference System (ANFIS) for medium-voltage cables in the distribution system. Different faults and locations are simulated with ATP/EMTP, and selected features of the wavelet-transformed signals are used as inputs to train the ANFIS. An accurate fault classifier and locator algorithm was then designed, trained, and tested using current samples only. The results obtained from the ANFIS output were compared with the real output, and the percentage error between them was found to be less than three percent. Hence, it can be concluded that the proposed technique offers high accuracy in both fault classification and fault location. Keywords: ANFIS, fault location, underground cable, wavelet transform
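The wavelet feature-extraction step can be illustrated with a one-level Haar DWT implemented directly in numpy; this is only a minimal sketch, since the paper does not state which mother wavelet or decomposition depth it used, and the step signal is a crude stand-in for a simulated fault current:

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar discrete wavelet transform of an even-length
    signal. Returns (approximation, detail) coefficients; the detail
    coefficients highlight abrupt transients such as fault inception."""
    x = np.asarray(signal, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass (detail)
    return a, d

# a step transient, like a much-simplified fault current sample
current = np.concatenate([np.zeros(7), np.ones(9)])
approx, detail = haar_dwt(current)

# one simple scalar feature an ANFIS input vector might include
feature = np.abs(detail).max()
```

Features of this kind, extracted from each simulated fault case, are what would be assembled into the ANFIS training set; the smooth `approx` coefficients carry the steady-state waveform, while `detail` isolates the discontinuity.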
Procedia PDF Downloads 515