Search results for: Fuzzy Analytic Network Process
3599 Data Rate Based Grouping Scheme for Cooperative Communications in Wireless LANs
Authors: Sunmyeng Kim
Abstract:
IEEE 802.11a/b/g standards provide multiple transmission rates, which can be changed dynamically according to the channel condition. Cooperative communications were introduced to improve the overall performance of wireless LANs with the help of relay nodes that have higher transmission rates. Cooperative communications rest on the fact that sending data packets to a destination node through a relay node with a higher transmission rate is much faster than sending data directly to the destination node at a low transmission rate. Several MAC protocols have been proposed to apply cooperative communications in wireless LANs, but some of them can cause collisions among relay nodes in a dense network. In order to solve this problem, we propose a new protocol: relay nodes are grouped based on their transmission rates, and then only relay nodes in the highest group try to get channel access. Performance evaluation is conducted using simulation and shows that the proposed protocol significantly outperforms the previous protocol in terms of throughput and collision probability.
Keywords: Cooperative communications, MAC protocol, relay node, WLAN.
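A minimal Python sketch of the grouping idea described above; the candidate list, rate values, and random contention winner are illustrative assumptions, not the protocol's actual parameters:

```python
import random

def pick_relay(candidates):
    """Group relay candidates by transmission rate and let only the
    highest-rate group contend for the channel (illustrative only)."""
    groups = {}
    for node, rate_mbps in candidates:
        groups.setdefault(rate_mbps, []).append(node)
    top_rate = max(groups)                  # highest-rate group
    contenders = groups[top_rate]           # only these attempt channel access
    return random.choice(contenders), top_rate  # one wins the contention

relays = [("n1", 11), ("n2", 54), ("n3", 54), ("n4", 24)]
winner, rate = pick_relay(relays)
print(f"relay {winner} wins at {rate} Mb/s")
```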
3598 Robust Heart Sounds Segmentation Based on the Variation of the Phonocardiogram Curve Length
Authors: Mecheri Zeid Belmecheri, Maamar Ahfir, Izzet Kale
Abstract:
Automatic cardiac auscultation is still a subject of research aimed at establishing an objective diagnosis. Recorded heart sounds, as Phonocardiogram (PCG) signals, can be automatically segmented into components that have clinical meaning: the first sound, S1, the second sound, S2, and the systolic and diastolic components, respectively. In this paper, an automatic method is proposed for the robust segmentation of heart sounds. The method calculates an intermediate sawtooth-shaped signal from the length variation of the recorded PCG signal in the time domain and uses its positive derivative function, which is a binary signal, to train a Recurrent Neural Network (RNN). Results obtained on a large database of PCGs recorded simultaneously with Electrocardiograms (ECGs) from different patients in clinical settings, including normal and abnormal subjects, show an average segmentation testing performance of 76% sensitivity and 94% specificity.
Keywords: Heart sounds, PCG segmentation, event detection, Recurrent Neural Networks, PCG curve length.
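A small numpy sketch of the curve-length transform the abstract describes; the window length and the synthetic PCG signal are illustrative assumptions:

```python
import numpy as np

def curve_length_signal(pcg, win=256):
    """Sliding-window curve length of the PCG, then the binary
    positive-derivative signal used as the RNN training target."""
    seg = np.sqrt(1.0 + np.diff(pcg) ** 2)        # per-sample curve length
    cl = np.convolve(seg, np.ones(win), mode="same")  # sawtooth-shaped length curve
    binary = (np.diff(cl) > 0).astype(int)        # 1 where the curve rises
    return cl, binary

t = np.linspace(0, 2, 4000)
pcg = np.sin(2 * np.pi * 30 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)
cl, gate = curve_length_signal(pcg)
print(cl.shape, gate[:10])
```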
3597 Robot Movement Using the Trust Region Policy Optimization
Authors: Romisaa Ali
Abstract:
The policy gradient approach is a subset of Deep Reinforcement Learning (DRL), which combines Deep Neural Networks (DNNs) with Reinforcement Learning (RL). This approach finds the optimal policy of robot movement based on the experience the robot gains from interaction with its environment. Unlike previous policy gradient algorithms, which were unable to handle the two types of error, variance and bias, introduced by the DNN model due to over- or underestimation, this algorithm is capable of handling both. This article discusses the state-of-the-art (SOTA) policy gradient technique, Trust Region Policy Optimization (TRPO), by applying it in various environments and comparing it with another policy gradient method, Proximal Policy Optimization (PPO), to explain their robust optimization, using this SOTA method to gather experience data during various training phases after observing the impact of hyper-parameters on neural network performance.
Keywords: Deep neural networks, deep reinforcement learning, Proximal Policy Optimization, state-of-the-art, trust region policy optimization.
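As a point of reference for the PPO side of the comparison, a minimal numpy sketch of the clipped surrogate objective PPO optimizes; the clip range epsilon and the sample arrays are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """PPO clipped surrogate: bound the policy ratio to [1-eps, 1+eps]
    so one update cannot move the policy too far (trust-region-like)."""
    ratio = np.exp(new_logp - old_logp)                 # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))     # maximize -> minimize

adv = np.array([1.0, -0.5, 2.0])
print(ppo_clip_loss(np.log([0.4, 0.3, 0.3]), np.log([0.3, 0.4, 0.3]), adv))
```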
3596 Ensuring Data Security and Consistency in FTIMA - A Fault Tolerant Infrastructure for Mobile Agents
Authors: Umar Manzoor, Kiran Ijaz, Wajiha Shamim, Arshad Ali Shahid
Abstract:
Transaction management is one of the most crucial requirements for enterprise application development, which often requires concurrent access to distributed data shared amongst multiple applications/nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerance Infrastructure for Mobile Agents (FTIMA) provides fault tolerant behavior in distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flows through the network and contains personal, private or confidential information. In banking transactions, a minor change in the transaction can cause a great loss to the user. In this paper we modify the FTIMA architecture to ensure that the user request reaches the destination server securely and without any change. We use Triple DES for encryption/decryption and the MD5 algorithm to check the validity of the message.
Keywords: Distributed Transaction, Security, Mobile Agents, FTIMA Architecture.
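A minimal sketch of the encrypt-then-digest step described above, using the pycryptodome package and hashlib; the key handling and message framing are illustrative assumptions, not FTIMA's actual wire format:

```python
import hashlib
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = DES3.adjust_key_parity(get_random_bytes(24))  # 3-key Triple DES

def protect(message: bytes):
    """Encrypt with Triple DES (CBC) and attach an MD5 digest of the
    plaintext so the receiver can detect any in-transit change."""
    iv = get_random_bytes(8)
    cipher = DES3.new(key, DES3.MODE_CBC, iv)
    ct = cipher.encrypt(pad(message, DES3.block_size))
    return iv, ct, hashlib.md5(message).hexdigest()

def verify(iv, ct, digest):
    cipher = DES3.new(key, DES3.MODE_CBC, iv)
    message = unpad(cipher.decrypt(ct), DES3.block_size)
    assert hashlib.md5(message).hexdigest() == digest, "message was altered"
    return message

iv, ct, tag = protect(b"transfer 100 to account 42")
print(verify(iv, ct, tag))
```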
3595 Trends in Competitiveness of the Thai Printing Industry
Authors: Amon Lasomboon
Abstract:
Since the world printing industry has to confront globalization with constant change, the Thai printing industry, as a small but increasingly significant part of the world printing industry, cannot escape this change and has to revamp its production processes, designs and technology to make them more appealing to both international and domestic markets. The essential question is: what is the Thai competitive edge in the printing industry in a changing environment? This research is aimed at studying the Thai level of competitive edge in terms of marketing, technology and environmental friendliness, and the level of satisfaction with the process of using printing machines. To assess the trends in competitiveness of the Thai printing industry, both a quantitative and a qualitative study were conducted. The quantitative analysis was restricted to 100 respondents. The qualitative analysis was restricted to a focus group of 10 individuals from various backgrounds in the Thai printing industry. The findings from the quantitative analysis revealed that the overall mean scores are 4.53, 4.10, and 3.50 for the competitiveness of marketing, the competitiveness of technology, and the competitiveness of being environmentally friendly, respectively. However, the level of satisfaction with the process of using machines has a mean score of only 3.20. The findings from the qualitative analysis revealed that target customers have increasingly reordered due to their contentment with both low prices and the acceptable quality of the products. Moreover, the Thai printing industry has a tendency to convert to green technology which is friendly to the environment, choosing to produce or substitute with products that are less damaging to the environment. It was also found that the Thai printing industry has been transformed into a very competitive industry in which bargaining power rests with consumers, who have a variety of choices.
Keywords: Competitiveness, Printing Industry.
3594 A Prototype of Augmented Reality for Visualising Large Sensors’ Datasets
Authors: Folorunso Olufemi Ayinde, Mohd Shahrizal Sunar, Sarudin Kari, Dzulkifli Mohamad
Abstract:
In this paper we discuss the development of an Augmented Reality (AR)-based scientific visualisation system prototype that supports identification, localisation, and 3D visualisation of oil leakage sensor datasets. Sensors generate significant amounts of multivariate data during normal and leak situations. We have therefore developed a data model to effectively manage such data and enhance the computational support needed for effective data exploration. A challenge of this approach is to reduce the data inefficiency caused by the disparate, repeated, inconsistent and missing attributes of most available sensor datasets. To handle this challenge, this paper aims to develop an AR-based scientific visualisation interface which automatically identifies, localises and visualises all necessary data relevant to a particular selected region of interest (ROI) along the virtual pipeline network. The necessary system architectural support as well as the interface requirements for such visualisations are also discussed in this paper.
Keywords: Sensor Leakage Datasets, Augmented Reality, Sensor Data Model, Scientific Visualisation.
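A tiny sketch of the ROI-driven selection such an interface performs; the record layout and the ROI box are illustrative assumptions:

```python
def in_roi(record, roi):
    """Keep a sensor record only if its position lies inside the
    axis-aligned ROI box selected along the virtual pipeline."""
    (x0, y0, z0), (x1, y1, z1) = roi
    x, y, z = record["pos"]
    return x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

readings = [
    {"id": "s1", "pos": (12.0, 3.1, 0.4), "pressure": 5.2},
    {"id": "s2", "pos": (48.9, 2.8, 0.5), "pressure": 4.7},
]
roi = ((10, 0, 0), (20, 5, 1))
visible = [r for r in readings if in_roi(r, roi)]
print(visible)  # only sensors inside the selected ROI are visualised
```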
3593 A Characterized and Optimized Approach for End-to-End Delay Constrained QoS Routing
Authors: P.S.Prakash, S.Selvan
Abstract:
QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while efficiently using the network resources; the underlying routing algorithm must be able to find low-cost paths that satisfy the given QoS constraints. The problem of finding least-cost constrained routes is known to be NP-complete, and some algorithms have been proposed to find a near-optimal solution. But these heuristics or algorithms either impose relationships among the link metrics to reduce the complexity of the problem, which may limit the general applicability of the heuristic, or are too costly in terms of execution time to be applicable to large networks. In this paper, we analyze two algorithms, namely Characterized Delay Constrained Routing (CDCR) and Optimized Delay Constrained Routing (ODCR). The CDCR algorithm presents an approach to delay-constrained routing that captures the trade-off between cost minimization and the risk level regarding the delay constraint. ODCR uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution in much less time.
Keywords: QoS, Delay, Routing, Optimization.
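For concreteness, a minimal sketch of the underlying problem both algorithms address, finding a least-cost path subject to an end-to-end delay bound; this is a generic label-setting search, not CDCR or ODCR themselves, and the toy graph is an illustrative assumption:

```python
import heapq

def cheapest_delay_constrained_path(graph, src, dst, max_delay):
    """graph: {u: [(v, cost, delay), ...]}. Explore (cost, delay) labels in
    cost order; a label is skipped if an earlier one dominates it."""
    best = {}  # node -> list of non-dominated (cost, delay) labels
    heap = [(0, 0, src, [src])]
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, delay, path          # first pop = cheapest feasible
        if any(c <= cost and d <= delay for c, d in best.get(u, [])):
            continue                          # dominated label
        best.setdefault(u, []).append((cost, delay))
        for v, c, d in graph.get(u, []):
            if delay + d <= max_delay:        # prune delay-infeasible states
                heapq.heappush(heap, (cost + c, delay + d, v, path + [v]))
    return None

g = {"s": [("a", 1, 5), ("b", 3, 1)], "a": [("t", 1, 5)], "b": [("t", 1, 1)]}
print(cheapest_delay_constrained_path(g, "s", "t", max_delay=4))
# -> (4, 2, ['s', 'b', 't']); the cheaper s-a-t path violates the delay bound
```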
3592 A Reconfigurable Microstrip Patch Antenna with Polyphase Filter for Polarization Diversity and Cross Polarization Filtering Operation
Authors: Lakhdar Zaid, Albane Sangiovanni
Abstract:
A reconfigurable microstrip patch antenna with a polyphase filter for polarization diversity and cross polarization filtering operation is presented in this paper. In our approach, a polyphase filter is used to obtain the four 90° phase-shifted outputs that feed a square microstrip patch antenna. The antenna can be switched between four states of polarization in transmission as well as in receiving mode. Switches are interconnected with the polyphase filter network to produce left-hand circular polarization, right-hand circular polarization, horizontal linear polarization, and vertical linear polarization. An additional advantage of using the polyphase filter is its filtering capability for cross polarization in right-hand and left-hand circular polarization operation. The theoretical and simulated results demonstrate that the polyphase filter is a good candidate to drive a microstrip patch antenna to accomplish polarization diversity and cross polarization filtering operation.
Keywords: Microstrip patch antenna, polyphase filter, circular polarization, linear polarization, reconfigurable antenna.
3591 Optimal Transmission Network Usage and Loss Allocation Using Matrices Methodology and Cooperative Game Theory
Authors: Baseem Khan, Ganga Agnihotri
Abstract:
Restructuring of the electricity supply industry introduced many issues such as transmission pricing, transmission loss allocation and congestion management. Many methodologies and algorithms have been proposed for addressing these issues. In this paper a power flow tracing based method is proposed which uses a matrices methodology for transmission usage and loss allocation for generators and demands. This method provides loss allocation in a direct way because all the computation is already done for usage allocation. The proposed method is simple and easy to implement in a large power system, and it is computationally light because it requires matrix inversion only a single time. After usage and loss allocation, cooperative game theory is applied to the results to find efficient economic signals. The Nucleolus and Shapley value approaches are used for optimal allocation of the results. Results are shown for the IEEE 6-bus system and the IEEE 14-bus system.
Keywords: Modified Kirchhoff Matrix, Power flow tracing, Transmission Pricing, Transmission Loss Allocation.
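A small sketch of the Shapley value computation mentioned above, for allocating a jointly caused cost (e.g., transmission losses) among participants; the three-player characteristic function is an illustrative assumption, not the IEEE test-system data:

```python
from itertools import permutations

def shapley(players, cost):
    """Average each player's marginal cost contribution over all
    join orders: the Shapley value of the cost game."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            values[p] += cost(coalition | {p}) - cost(coalition)
            coalition = coalition | {p}
    return {p: v / len(orders) for p, v in values.items()}

# illustrative losses (MW) caused by each coalition of generators
loss = {frozenset(): 0, frozenset("A"): 4, frozenset("B"): 6, frozenset("C"): 9,
        frozenset("AB"): 9, frozenset("AC"): 12, frozenset("BC"): 14,
        frozenset("ABC"): 18}
print(shapley("ABC", lambda s: loss[frozenset(s)]))
```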
3590 Single Phase Fluid Flow in Series of Microchannel Connected via Converging-Diverging Section with or without Throat
Authors: Abhishek Kumar Chandra, Kaushal Kishor, Wasim Khan, Dhananjay Singh, M. S. Alam
Abstract:
Single phase fluid flow through a series of uniform microchannels connected via transition sections (converging-diverging sections with or without throat) was analytically and numerically studied to characterize the flow within the channels and in the transition sections. Three sets of microchannels of diameters 100, 184, and 249 μm were considered for investigation. Each set contains 10 microchannels of length 20 mm, connected to each other in series via transition sections. A transition section consists of a converging-diverging section either with or without a throat. The effect of non-uniformity in the microchannels on pressure drop was determined by passing water/air through the set of channels for Reynolds numbers from 50 to 1000. Compressibility and rarefaction effects in the transition sections were also tested analytically and numerically for air flow. The analytical and numerical results show that these configurations can be used in the enhancement of transport processes. However, the converging-diverging section without throat shows superior performance over the with-throat configuration.
Keywords: Contraction-expansion flow, integrated microchannel, microchannel network, single phase flow.
3589 Spatial-Temporal Awareness Approach for Extensive Re-Identification
Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush
Abstract:
Recent developments in AI and edge computing play a critical role in capturing meaningful events such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs. Immediately following the detection of a meaningful event, the objects related to the event must be tracked and traced. In an extensive environment, the challenge becomes severe when the number of CCTVs increases substantially, imposing difficulties in achieving high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues related to the complexity behind a great number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional thinking and concept of operations, which is labor intensive and time consuming. The ability to perform Extensive Re-Identification through a multi-sensory network provides next-level insights, creating value beyond traditional risk management.
Keywords: Long-short-term memory, re-identification, security critical application, spatial-temporal awareness.
3588 Platform Urbanism: Planning towards Hyper-Personalisation
Authors: Provides Ng
Abstract:
Platform economy is a peer-to-peer model of distributing resources facilitated by community-based digital platforms. In recent years, digital platforms are rapidly reconfiguring the public realm using hyper-personalisation techniques. This paper aims at investigating how urban planning can leapfrog into the digital age to help relieve the rising tension of the global issue of labour flow; it discusses the means to transfer techniques of hyper-personalisation into urban planning for plasticity using platform technologies. This research first denotes the limitations of the current system of urban residency, where the system maintains itself on the circulation of documents, which are data on paper. Then, this paper tabulates how some of the institutions around the world, both public and private, digitise data, and streamline communications between a network of systems and citizens using platform technologies. Subsequently, this paper proposes ways in which hyper-personalisation can be utilised to form a digital planning platform. Finally, this paper concludes by reviewing how the proposed strategy may help to open up new ways of thinking about how we affiliate ourselves with cities.
Keywords: Platform urbanism, hyper-personalisation, urban residency, digital data.
3587 Solid Waste Pollution and the Importance of Environmental Planning in Managing and Preserving the Public Environment in Benghazi City and Its Surrounding Areas
Authors: Abdelsalam Omran Gebril Ali
Abstract:
Pollution and solid waste are the most important environmental problems plaguing the city of Benghazi as well as other cities and towns in Libya. These problems are caused by the lack of environmental planning and sound environmental management. Environmental planning is very important at present for the development of projects that preserve the environment; therefore, the planning process should be prioritized over the management process. Pollution caused by poor planning and environmental management exists not only in Benghazi but also in all other Libyan cities. This study was conducted through various field visits to several neighborhoods and areas within Benghazi as well as its neighboring regions. Follow-ups in these areas were conducted from March 2013 to October 2013 as documented by photographs. The existing methods of waste collection and means of transportation were investigated. Interviews were conducted with relevant authorities, including the Environment Public Authority in Benghazi and the Public Service Company of Benghazi. The objective of this study is to determine the causes of solid waste pollution in Benghazi City and its surrounding areas. Results show that solid waste pollution in Benghazi and its surrounding areas is the result of poor planning and environmental management, population growth, and the lack of hardware and equipment for the collection and transport of waste from the city to the landfill site. One of the most important recommendations in this study is the development of a complete and comprehensive plan that includes environmental planning and environmental management to reduce solid waste pollution.
Keywords: Solid waste, pollution, environmental planning, management, Benghazi, Libya.
3586 Automatic Detection of Syllable Repetition in Read Speech for Objective Assessment of Stuttered Disfluencies
Authors: K. M. Ravikumar, Balakrishna Reddy, R. Rajagopal, H. C. Nagaraj
Abstract:
Automatic detection of syllable repetition is one of the important parameters in assessing stuttered speech objectively. The existing method, which uses an artificial neural network (ANN), requires high levels of agreement as a prerequisite before attempting to train and test ANNs to separate fluent and non-fluent speech. We propose an automatic detection method for syllable repetition in read speech for objective assessment of stuttered disfluencies which uses a novel approach and has four stages: segmentation, feature extraction, score matching and decision logic. Feature extraction is implemented using the well-known Mel Frequency Cepstral Coefficients (MFCC). Score matching is done using Dynamic Time Warping (DTW) between the syllables. The decision logic is implemented by a perceptron based on the score given by score matching. Although many methods are available for segmentation, in this paper it is done manually. The method was evaluated against the assessment by human judges on the read speech of 10 adults who stutter, and the result was 83%.
Keywords: Assessment, DTW, MFCC, Objective, Perceptron, Stuttering.
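A compact sketch of the DTW score-matching stage; the random inputs stand in for per-syllable MFCC matrices and are illustrative assumptions:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping between two feature sequences
    (frames x coefficients), e.g. MFCCs of two syllables."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

syl1 = np.random.rand(40, 13)    # 40 frames of 13 MFCCs
syl2 = np.random.rand(55, 13)
print(dtw_distance(syl1, syl2))  # a low score suggests a repeated syllable
```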
3585 Removal of Cationic Heavy Metal and HOC from Soil-Washed Water Using Activated Carbon
Authors: Chi Kyu Ahn, Young Mi Kim, Seung Han Woo, Jong Moon Park
Abstract:
Soil washing with a surfactant solution is a potential technology for the rapid removal of hydrophobic organic compounds (HOC) from soil. However, a large amount of washed water is produced during operation, and this should be treated effectively by proper methods. The washed water from a site with complex HOC and heavy metal contamination may contain high amounts of pollutants such as HOC and heavy metals as well as the used surfactant. The heavy metals in the soil-washed water have toxic effects on microbial activities, so they should be removed from the washed water before it proceeds to a biological wastewater treatment system. Moreover, the used surfactant solutions need to be recovered to reduce the soil washing operation cost. In order to simultaneously remove heavy metals and HOC from soil-washed water, activated carbon (AC) was used in the present study. In an anionic-nonionic surfactant mixed solution, Cd(II) and phenanthrene (PHE) were effectively removed by adsorption on activated carbon. The removal efficiency for Cd(II) increased from 0.027 mmol-Cd/g-AC to 0.142 mmol-Cd/g-AC as the mole ratio of SDS increased in the presence of PHE. The adsorptive capacity for PHE also increased with the SDS mole ratio, due to the decrease of the molar solubilization ratio (MSR) for PHE in an anionic-nonionic surfactant mixture. The simultaneous adsorption of HOC and cationic heavy metals using activated carbon could be a useful method for surfactant recovery and the reduction of heavy metal toxicity in a surfactant-enhanced soil washing process.
Keywords: Activated carbon, Anionic-nonionic surfactant mixture, Cationic heavy metal, HOC, Soil washing
3584 Evaluation of NH3-Slip from Diesel Vehicles Equipped with Selective Catalytic Reduction Systems by Neural Networks Approach
Authors: Mona Lisa M. Oliveira, Nara A. Policarpo, Ana Luiza B. P. Barros, Carla A. Silva
Abstract:
Selective catalytic reduction (SCR) systems, which reduce nitrogen oxides with ammonia, have been the technology chosen by most diesel vehicle (i.e., bus and truck) manufacturers in Brazil, as well as in Europe. Furthermore, under some conditions over-stoichiometric ammonia availability is needed, which increases the NH3 slip even more. Ammonia (NH3) in this vehicle exhaust aftertreatment system provides maximum NOx removal efficiency when a significant amount of NH3 is stored on the catalyst surface. In other words, practice shows that slightly less than 100% NOx conversion is usually targeted, so that the aqueous urea solution hydrolyzes to NH3 via the formation of other species under relatively low temperatures. This paper presents a model based on neural networks integrated with a road vehicle simulator that allows NH3-slip emission factors to be estimated for different driving conditions and patterns. The proposed model generates high NH3 slips, which are not limited in Brazil either; more effort is needed to elucidate the contribution of vehicle-emitted NH3 to the urban atmosphere.
Keywords: Ammonia slip, neural network, vehicle emissions, SCR-NOx.
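A minimal sketch of the kind of neural-network emission-factor model described above, using scikit-learn; the feature set (speed, acceleration, SCR temperature) and the synthetic training data are illustrative assumptions, not the paper's simulator or measurements:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# illustrative features per driving snapshot: speed, acceleration, SCR temperature
X = rng.uniform([0, -2, 150], [120, 2, 450], size=(500, 3))
# synthetic stand-in for a measured NH3-slip emission factor; not real data
y = 0.002 * X[:, 0] + 0.05 * np.abs(X[:, 1]) + 0.001 * (X[:, 2] - 250) ** 2 / 100

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[60.0, 0.5, 300.0]]))  # NH3-slip estimate for one condition
```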
3583 The Model of the Genre of Literary Portrait in Modern Literary Criticism
Authors: B. K. Bazylova, Zh. D. Suleimenova
Abstract:
In modern literary criticism the problem of genre is a subject of discussion. Genre is a phenomenon located at the intersection of the synchronous and diachronic processes in the development of literature, and this accounts for the complexity of its solution. It defines the place of contact between literary works and the literary process.
Keywords: Literary criticism, literary portrait.
3582 Contention Window Adjustment in IEEE 802.11-Based Industrial Wireless Networks
Authors: Mohsen Maadani, Seyed Ahmad Motamedi
Abstract:
The use of wireless technology in industrial networks has gained considerable attention in recent years. In this paper, we thoroughly analyze the effect of contention window (CW) size on the performance of IEEE 802.11-based industrial wireless networks (IWNs) from a delay and reliability perspective. Results show that the default values of CWmin, CWmax, and retry limit (RL) are far from optimum performance due to industrial application characteristics, including short packets and a noisy environment. In this paper, an adaptive, payload-dependent CW algorithm is proposed to minimize the average delay. Finally, a simple but effective CW and RL setting is proposed for industrial applications which outperforms the minimum-average-delay solution from a maximum delay and jitter perspective, at the cost of a slightly higher average delay. Simulation results show an improvement of up to 20%, 25%, and 30% in average delay, maximum delay and jitter, respectively.
Keywords: Average Delay, Contention Window, Distributed Coordination Function (DCF), Jitter, Industrial Wireless Network (IWN), Maximum Delay, Reliability, Retry Limit.
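For background, a small sketch of the standard DCF backoff that CWmin, CWmax and the retry limit parameterize; the default-like numbers are the tunables under study, not the paper's proposed values:

```python
import random

def dcf_backoff_slots(cw_min=16, cw_max=1024, retry_limit=7):
    """Yield the random backoff (in slots) drawn before each
    (re)transmission attempt; CW doubles on every failed attempt."""
    cw = cw_min
    for attempt in range(retry_limit + 1):
        yield attempt, random.randint(0, cw - 1)
        cw = min(2 * cw, cw_max)    # binary exponential backoff

for attempt, slots in dcf_backoff_slots():
    print(f"attempt {attempt}: wait {slots} slots")
```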
3581 The Survey Research and Evaluation of Green Residential Building Based on the Improved Group Analytical Hierarchy Process Method in Yinchuan
Abstract:
Due to the economic downturn and the deterioration of the living environment, the development of residential buildings, as high-energy-consuming buildings, is gradually changing from “extensive” construction to green building in China. The evaluation system for green building is continuously being improved, but the current evaluation work has the following problems: (1) There are differences between the cost of the actual investment and the purchasing power of residents, and the construction targets of green residential buildings are single and lack multi-objective performance development. (2) Green building evaluation lacks regional characteristics and cannot reflect the demands of residents in different regions. (3) In the process of determining the criteria weights, the experts’ judgment matrix is difficult to bring up to the consistency requirement. Therefore, to solve those problems, questionnaires about green residential building were distributed in the Ningxia area, and their results reflect the purchasing power of residents and the acceptance of green building costs. Secondly, combined with the geographical features of the Ningxia minority areas, the evaluation criteria system for green residential building is constructed. Finally, using the improved group AHP method and the grey clustering method, the criteria weights are determined, and a real case located in Xing Qing district, Ningxia, is evaluated. It can be concluded that the professional evaluation of this project is basically consistent with its good social recognition.
Keywords: Evaluation, green residential building, grey clustering method, group AHP.
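A compact sketch of the standard AHP step the improved group method builds on: deriving criteria weights from a pairwise-comparison matrix and checking its consistency ratio; the 3x3 matrix and criteria names are illustrative assumptions:

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}  # random index table

def ahp_weights(A):
    """Principal-eigenvector criteria weights plus the consistency
    ratio CR = ((lambda_max - n) / (n - 1)) / RI[n]."""
    n = A.shape[0]
    eigval, eigvec = np.linalg.eig(A)
    k = np.argmax(eigval.real)
    w = np.abs(eigvec[:, k].real)
    w /= w.sum()
    ci = (eigval[k].real - n) / (n - 1)
    return w, ci / RI[n]

# illustrative pairwise comparisons: cost vs energy vs comfort
A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]], dtype=float)
w, cr = ahp_weights(A)
print(w, f"CR={cr:.3f}")  # CR < 0.1 is conventionally considered consistent
```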
3580 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools
Abstract:
Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, which deals with various crimes committed in the digital environment, has become an important research topic. It is in the research scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data on the digital evidence that matches specified criteria and presenting it to the investigator (e.g., text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner; and because the outcome depends on the examiner’s experience, relevant evidence may be overlooked and results may differ between cases. In this study, a hash-based matching and digital evidence evaluation method is proposed which aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human errors.
Keywords: Block matching, digital evidence, hash list.
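A minimal sketch of hash-based block matching on an evidence image: hash fixed-size blocks and flag the offsets whose digest appears in a known hash list; the block size, image path, and hash list contents are hypothetical:

```python
import hashlib

BLOCK = 4096  # illustrative block size in bytes

def matching_blocks(image_path, known_hashes):
    """Scan a raw evidence image block by block and report offsets
    whose MD5 digest appears in the known hash list."""
    hits = []
    with open(image_path, "rb") as f:
        offset = 0
        while chunk := f.read(BLOCK):
            if hashlib.md5(chunk).hexdigest() in known_hashes:
                hits.append(offset)
            offset += len(chunk)
    return hits

# hypothetical hash list of blocks from known files of interest
known = {"9e107d9d372bb6826bd81d3542a419d6"}
print(matching_blocks("evidence.dd", known))  # "evidence.dd" is a placeholder
```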
3579 Removal of Malachite Green from Aqueous Solution Using Hydrilla verticillata - Optimization, Equilibrium and Kinetic Studies
Authors: R. Rajeshkannan, M. Rajasimman, N. Rajamohan
Abstract:
In this study, the sorption of Malachite Green (MG) on Hydrilla verticillata biomass, a submerged aquatic plant, was investigated in a batch system. The effects of operating parameters such as temperature, adsorbent dosage, contact time, adsorbent size, and agitation speed on the sorption of Malachite Green were analyzed using response surface methodology (RSM). The proposed quadratic model for the central composite design (CCD) fitted the experimental data so well that, according to the ANOVA results, it could be used to navigate the design space. The optimum sorption conditions were determined as temperature 43.5 °C, adsorbent dosage 0.26 g, contact time 200 min, adsorbent size 0.205 mm (65 mesh), and agitation speed 230 rpm. The Langmuir and Freundlich isotherm models were applied to the equilibrium data. The maximum monolayer coverage capacity of Hydrilla verticillata biomass for MG was found to be 91.97 mg/g at an initial pH of 8.0, indicating that this is the optimum initial pH for sorption. The external and intra-particle diffusion models were also applied to the sorption data, and it was found that both external diffusion and intra-particle diffusion contribute to the actual sorption process. The pseudo-second order kinetic model described the MG sorption process with a good fit.
Keywords: Response surface methodology, Hydrilla verticillata, malachite green, adsorption, central composite design
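A small sketch of fitting the Langmuir isotherm q_e = q_max K_L C_e / (1 + K_L C_e) to equilibrium data with scipy; the data points below are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: monolayer capacity qmax, affinity constant kl."""
    return qmax * kl * ce / (1.0 + kl * ce)

# synthetic equilibrium data (Ce in mg/L, qe in mg/g); placeholders only
ce = np.array([2, 5, 10, 20, 40, 80], dtype=float)
qe = np.array([18, 35, 52, 68, 80, 87], dtype=float)

(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=[90.0, 0.05])
print(f"qmax = {qmax:.1f} mg/g, KL = {kl:.3f} L/mg")
```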
3578 An Adaptive Memetic Algorithm With Dynamic Population Management for Designing HIV Multidrug Therapies
Authors: Hassan Zarei, Ali Vahidian Kamyad, Sohrab Effati
Abstract:
In this paper, a mathematical model of the human immunodeficiency virus (HIV) is utilized and an optimization problem is proposed, with the final goal of implementing an optimal 900-day structured treatment interruption (STI) protocol. Two types of drugs commonly used in highly active antiretroviral therapy (HAART), reverse transcriptase inhibitors (RTI) and protease inhibitors (PI), are considered. In order to solve the proposed optimization problem, an adaptive memetic algorithm with population management (AMAPM) is proposed. The AMAPM uses a distance measure to control the diversity of the population in genotype space, thus preventing stagnation and premature convergence. Moreover, the AMAPM uses a diversity parameter in phenotype space to dynamically set the population size and the number of crossovers during the search process. Three crossover operators diversify the population simultaneously, and the progress of each crossover operator is used to set the number of each crossover per generation. In order to escape local optima and introduce new search directions toward the global optimum, two local searchers assist the evolutionary process. In contrast to traditional memetic algorithms, the activation of these local searchers is not random and depends on both the diversity parameter in genotype space and that in phenotype space. The capability of the AMAPM in finding optimal solutions is compared with three popular metaheuristics.
Keywords: HIV therapy design, memetic algorithms, adaptive algorithms, nonlinear integer programming.
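A tiny sketch of the distance-based population-management idea: measure genotype diversity as a normalized mean pairwise Hamming distance and trigger local search or resizing only when diversity drops; the threshold and binary genomes are illustrative assumptions, not AMAPM itself:

```python
import itertools
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def genotype_diversity(pop):
    """Mean pairwise Hamming distance, normalized by genome length."""
    pairs = list(itertools.combinations(pop, 2))
    return sum(hamming(a, b) for a, b in pairs) / (len(pairs) * len(pop[0]))

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
div = genotype_diversity(population)
if div < 0.25:            # illustrative threshold
    print("low diversity: activate local searchers / adjust population")
else:
    print(f"diversity {div:.2f}: continue standard evolution")
```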
3577 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: G. Candel, D. Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or feature value; and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the same exact position, making them indistinguishable, and such a model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable over the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets’ dynamics.
Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.
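The paper's own two-cost optimizer is not reproduced here, but the core trick, anchoring a new embedding to an existing support embedding, can be approximated with scikit-learn by initializing t-SNE from the previous positions; the nearest-support placement of new points and the synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_old = rng.normal(size=(300, 10))                 # support dataset
Y_old = TSNE(n_components=2, random_state=0).fit_transform(X_old)

X_new = X_old + rng.normal(scale=0.1, size=X_old.shape)  # drifted snapshot
# start each new point at its nearest support point's old position,
# so cluster locations stay comparable between successive embeddings
nn = NearestNeighbors(n_neighbors=1).fit(X_old)
_, idx = nn.kneighbors(X_new)
Y_init = Y_old[idx[:, 0]]
Y_new = TSNE(n_components=2, init=Y_init, random_state=0).fit_transform(X_new)
print(Y_new.shape)
```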
3576 Intelligent Modeling of the Electrical Activity of the Human Heart
Authors: Lambros V. Skarlas, Grigorios N. Beligiannis, Efstratios F. Georgopoulos, Adam V. Adamopoulos
Abstract:
The aim of this contribution is to present a new approach to modeling the electrical activity of the human heart. A recurrent artificial neural network is used in order to exhibit a subset of the dynamics of the electrical behavior of the human heart. The proposed model can also be used, when integrated, as a diagnostic tool for the human heart system. What makes this approach unique is the fact that every model is developed from physiological measurements of an individual. This kind of approach is very difficult to apply successfully in many modeling problems because of the complexity and entropy of the free variables describing the complex system. Differences between the modeled variables and the variables of an individual, measured at specific moments, can be used for diagnostic purposes. The sensor fusion used in order to optimize the utilization of biomedical sensors is another point that this paper focuses on; sensor fusion has been known for its advantages in applications such as control and diagnostics of mechanical and chemical processes.
Keywords: Artificial Neural Networks, Diagnostic System, Health Condition Modeling Tool, Heart Diagnostics Model, Heart Electricity Model.
3575 A Learner-Centred or Artefact-Centred Classroom? Impact of Technology, Artefacts, and Environment on Task Processes in an English as a Foreign Language Classroom
Authors: Nobue T. Ellis
Abstract:
This preliminary study attempts to see if a learning environment influences the instructor’s teaching strategies and learners’ in-class activities in a foreign language class at a university in Japan. The class under study was conducted in a computer room, while the majority of classes of the same course were offered in traditional classrooms without computers. The study also examines whether the unplanned blended learning environment enhanced, or worked against, achieving the course goals, by paying close attention to in-class artefacts such as computers. In the macro-level analysis, the course syllabus and the weekly itinerary of the course were examined; in the micro-level analysis, nonhuman actors in their environments were named and analyzed to see how they influenced the learners’ task processes. The results indicated that students were heavily influenced by the presence of computers, which led them to disregard some aspects of the intended learning objectives.
Keywords: Computer-assisted language learning, actor-network theory, English as a foreign language, task-based teaching.
3574 An Implementation of a Configurable UART-to-Ethernet Converter
Authors: Jungho Moon, Myunggon Yoon
Abstract:
This paper presents an implementation of a configurable UART-to-Ethernet converter using an ARM-based 32-bit microcontroller, together with a dedicated configuration program that runs on a PC and configures the operating parameters of the converter. The program was written in Python. Various parameters pertaining to the operation of the converter can be modified by the configuration program through the Ethernet interface of the converter. The converter supports three representative asynchronous serial communication protocols, RS-232, RS-422, and RS-485, and three network modes: TCP/IP server, TCP/IP client, and UDP client. The TCP/IP and UDP protocols were implemented on the microcontroller using lwIP, an open source lightweight TCP/IP protocol stack, and FreeRTOS, a free real-time operating system for embedded systems. Due to the use of a real-time operating system, the firmware of the converter was implemented as a multi-thread application, and as a result it is more modular and easier to develop. The converter provides a seamless bridge between a serial port and an Ethernet port, thereby allowing existing legacy apparatuses with no Ethernet connectivity to communicate using the Ethernet protocol.
Keywords: Converter, embedded systems, Ethernet, lwIP, UART.
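The firmware itself is C on the microcontroller, but the converter's TCP/IP-server mode can be sketched behaviorally on a PC in Python with the pyserial package; the port name, baud rate, and TCP port are illustrative assumptions:

```python
import socket
import serial  # pyserial

SERIAL_PORT, BAUD, TCP_PORT = "/dev/ttyUSB0", 115200, 5000  # assumptions

def bridge():
    """Accept one TCP client and relay bytes both ways, mimicking
    the converter's TCP/IP-server network mode."""
    uart = serial.Serial(SERIAL_PORT, BAUD, timeout=0.05)
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", TCP_PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.settimeout(0.05)
    while True:
        if data := uart.read(1024):        # serial -> Ethernet
            conn.sendall(data)
        try:
            if data := conn.recv(1024):    # Ethernet -> serial
                uart.write(data)
        except socket.timeout:
            pass

if __name__ == "__main__":
    bridge()
```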
3573 Coupling Phenomenon between the Lightning and High Voltage Networks
Authors: Dib Djalel, Haddouche Ali, Chellali Benachiba
Abstract:
When a lightning strike falls near an overhead power line, the intense electromagnetic field radiated by the current of the lightning return stroke couples with the power line and induces transient overvoltages there, which can cause a back-flashover in the electrical network. Indirect lightning represents a major danger because it is more frequent than direct strikes. In this paper we present an analysis of the electromagnetic coupling between an external electromagnetic field generated by lightning and overhead electrical lines. Our contribution is based on the experimental measurements we carried out in the high-voltage laboratories of EPFL in Switzerland during the last trimester of 2005 and on the recent works of other authors; with our mathematical improvement, a new analytical expression for the electromagnetic field generated by the lightning return stroke was developed and is presented in this paper. The results obtained with this new electromagnetic field formulation were compared with experimental results and show reasonable agreement.
Keywords: Lightning, overhead lines, electromagnetic coupling, return stroke, models, induced overvoltages.
3572 Studying the Value-Added Chain for the Fish Distribution Process at Quang Binh Fishing Port in Vietnam
Authors: Van Chung Nguyen
Abstract:
The purpose of this research is to study the current status of the value chain for fish distribution at Quang Binh Fishing Port, with 360 research samples in which the research subjects are fishermen, traders, retailers, and businesses. The research applies the value chain theoretical framework of Kaplinsky and Morris to quantify and describe the market channels and the actors participating in the value chain, and to analyze the value-added process of these actors across market channels. The analysis results show that fishermen who directly catch fish achieve high economic efficiency, but processing enterprises and especially retailers are the agents that obtain higher added value. Processing enterprises play a role that is not really clear due to outdated processing technology; in contrast, retailers gain the highest added value. This shows that the added value of the fish supply chain at Quang Binh fishing port is still limited, leading to low output quality. Therefore, the selling price of fish on the market is still high relative to the abundant fish resources, leading to low consumption, and exports are limited by the quality of the processing enterprises. This reduces demand and fishing capacity, and productivity is lower than its potential. To improve the fish value chain at fishing ports, it is necessary to focus on improving product quality, strengthening linkages between actors, building brands and product consumption markets, and, at the same time, improving the capacity of export processing enterprises.
Keywords: Quang Binh fishing port, value chain, fish market, distribution channels.
3571 An Agent-Based Modelling Simulation Approach to Calculate Processing Delay of GEO Satellite Payload
Authors: V. Vicente E. Mujica, Gustavo Gonzalez
Abstract:
The global coverage of broadband multimedia and internet-based services in terrestrial-satellite networks creates particular interest for satellite providers in enhancing services with low latencies and high signal quality for diverse users. In particular, the delay of on-board processing is an inherent source of latency in satellite communication that is sometimes disregarded in the end-to-end delay of the satellite link. The framework of this paper includes modelling an on-orbit satellite payload using an agent model that can reproduce the properties of processing delays. In essence, a comparison of different spatial interpolation methods is carried out to evaluate physical data obtained by a GEO satellite in order to define a discretization function for determining that delay. Furthermore, the performance of the proposed agent and the developed delay discretization function are validated together by simulating a hybrid satellite and terrestrial network. Simulation results show high accuracy according to the characteristics of the initial data points of processing delay for Ku band.
Keywords: Terrestrial-satellite networks, latency, on-orbit satellite payload, simulation.
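A small sketch of one spatial interpolation method such a comparison typically includes, inverse distance weighting over sampled delay points; the sample coordinates and delay values are illustrative assumptions, not the satellite's physical data:

```python
import numpy as np

def idw(samples, delays, query, power=2.0):
    """Inverse distance weighting: estimate processing delay at a query
    point from measured (coordinate, delay) samples."""
    d = np.linalg.norm(samples - query, axis=1)
    if np.any(d == 0):
        return float(delays[np.argmin(d)])   # exact hit on a sample point
    w = 1.0 / d ** power
    return float(np.sum(w * delays) / np.sum(w))

# illustrative samples: (x, y) operating points -> measured delay (ms)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ms = np.array([2.1, 2.8, 2.4, 3.3])
print(idw(pts, ms, np.array([0.6, 0.4])))
```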
3570 Specific Emitter Identification Based on Refined Composite Multiscale Dispersion Entropy
Authors: Shaoying Guo, Yanyun Xu, Meng Zhang, Weiqing Huang
Abstract:
Wireless communication networks are developing rapidly, and thus wireless security becomes more and more important. Specific emitter identification (SEI) is a vital part of wireless communication security, as a technique to identify unique transmitters. In this paper, an SEI method based on multiscale dispersion entropy (MDE) and refined composite multiscale dispersion entropy (RCMDE) is proposed. The algorithms of MDE and RCMDE are used to extract features for the identification of five wireless devices, and a cross-validation support vector machine (CV-SVM) is used as the classifier. The experimental results show that the total identification accuracy is 99.3%, even at a low signal-to-noise ratio (SNR) of 5 dB, which proves that MDE and RCMDE can describe the communication signal series well. In addition, compared with other methods, the proposed method is effective and provides better accuracy and stability for SEI.
Keywords: Cross-validation support vector machine, refined composite multiscale dispersion entropy, specific emitter identification, transient signal, wireless communication device.
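A compact sketch of single-scale dispersion entropy, the building block that MDE and RCMDE coarse-grain and average; the class count c, embedding dimension m, and the synthetic test signal are illustrative choices:

```python
import numpy as np
from scipy.stats import norm
from collections import Counter

def dispersion_entropy(x, c=6, m=3):
    """Map samples to c classes via the normal CDF, count length-m
    dispersion patterns, and return normalized Shannon entropy."""
    y = norm.cdf((x - np.mean(x)) / np.std(x))        # squash to (0, 1)
    z = np.clip(np.round(c * y + 0.5), 1, c).astype(int)
    patterns = Counter(tuple(z[i:i + m]) for i in range(len(z) - m + 1))
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log(p)) / np.log(c ** m))

sig = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * np.random.randn(2000)
print(dispersion_entropy(sig))  # lower for structured, higher for noisy series
```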