Search results for: Network monitoring
320 Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory
Authors: Chiung-Hui Chen
Abstract:
The Internet of Things (IoT) was designed for widespread convenience. With smart tags and the sensing network, a large quantity of dynamic information is immediately available in the IoT. Through internal communication and interaction, meaningful objects provide real-time services to users. Therefore, providing services with appropriate decision-making has become an essential issue. Based on the science of human behavior, this study employed an environment model to record the time sequences and locations of different behaviors, and adopted the probabilistic machinery of the hierarchical Hidden Markov Model for inference. The statistical analysis was conducted to achieve the following objectives: First, define user behaviors and predict user behavior routes with the environment model to analyze user purposes. Second, construct the hierarchical Hidden Markov Model according to the logic framework, and establish the sequential intensity among behaviors to characterize the use and activity fabric of the intelligent environment. Third, establish the intensity of the relation between objects and the probability of their being used; this indicator can describe the possible limitations of the mechanism. As the process is recorded in the information system created in this study, these data can be reused to adjust the procedure of intelligent design services.
Keywords: Behavior, big data, hierarchical Hidden Markov Model, intelligent object.
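As an illustration only (not the authors' implementation), the sketch below runs Viterbi decoding on a single-level Hidden Markov Model, a simplified stand-in for the hierarchical model described above: observed object/location events are mapped to the most likely sequence of user behaviors. All states, symbols and probabilities are invented for the example.

```python
import numpy as np

# Toy behaviour states and observed object/location events (assumed labels;
# the paper's hierarchical model is simplified here to a single-level HMM).
states = ["resting", "cooking", "working"]
obs_symbols = ["bed", "stove", "desk"]

pi = np.array([0.5, 0.25, 0.25])          # initial state probabilities
A = np.array([[0.7, 0.2, 0.1],            # behaviour transition matrix
              [0.3, 0.6, 0.1],
              [0.2, 0.1, 0.7]])
B = np.array([[0.8, 0.1, 0.1],            # emission probabilities
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def viterbi(obs):
    """Most likely behaviour sequence for an observed event sequence."""
    T, N = len(obs), len(states)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = scores.argmax()
            delta[t, j] = scores.max() * B[j, obs[t]]
    path = [delta[-1].argmax()]
    for t in range(T - 1, 0, -1):          # backtrack through psi
        path.append(psi[t, path[-1]])
    return [states[i] for i in reversed(path)]

events = [obs_symbols.index(e) for e in ["bed", "stove", "desk", "desk"]]
print(viterbi(events))
```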
319 Artificial Intelligence Model to Predict Surface Roughness of Ti-15-3 Alloy in EDM Process
Authors: Md. Ashikur Rahman Khan, M. M. Rahman, K. Kadirgama, M.A. Maleque, Rosli A. Bakar
Abstract:
Conventionally, the selection of parameters depends heavily on the operator's experience or on conservative technological data provided by the EDM equipment manufacturers, which yields inconsistent machining performance. The parameter settings given by the manufacturers are only relevant to common steel grades, and a single parameter change influences the process in a complex way. Hence, the present research proposes artificial neural network (ANN) models for the prediction of surface roughness of the first-commenced Ti-15-3 alloy in the electrical discharge machining (EDM) process. The proposed models use peak current, pulse-on time, pulse-off time and servo voltage as input parameters. Multilayer perceptron (MLP) feedforward networks with three hidden layers are applied, and models with distinct hidden layers are assessed against each other. Training of the models is performed with data from an extensive series of experiments utilizing a copper electrode with positive polarity. The predictions based on the developed models have been verified with another set of experiments and are found to be in good agreement with the experimental results. Besides this, they can be used as valuable tools for process planning in EDM.
Keywords: Ti-15-3, surface roughness, copper, positive polarity, multilayer perceptron.
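A minimal sketch of such a surface-roughness predictor is given below, assuming scikit-learn's MLPRegressor with three hidden layers; the training rows are synthetic placeholders, not the paper's experimental data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: peak current (A), pulse-on time (us), pulse-off time (us),
# servo voltage (V); target: surface roughness Ra (um). All values invented.
X = np.array([[1.0, 10, 30, 75],
              [4.0, 50, 60, 85],
              [8.0, 100, 120, 95],
              [16.0, 200, 240, 105],
              [2.0, 20, 30, 80],
              [6.0, 75, 90, 90]])
y = np.array([1.2, 2.4, 3.6, 5.1, 1.8, 3.0])

# Three hidden layers, mirroring the multilayer perceptron described above.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[5.0, 60, 70, 88]]))   # roughness for a new setting
```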
318 Plasma Properties Effect on Fluorescent Tube Plasma Antenna Performance
Authors: A. N. Dagang, E. I. Ismail, Z. Zakaria
Abstract:
This paper presents an analysis of the performance of a monopole antenna built from fluorescent tubes. In this research, both simulation and experimental approaches are used. Fluorescent tubes of different lengths and sizes are designed using Computer Simulation Technology (CST) software, and the antenna parameters are simulated in the software. CST was used to simulate antenna parameters such as return loss, resonant frequency, gain and directivity. A Vector Network Analyzer (VNA) was used to measure the return loss of the plasma antenna in order to validate the simulation results. In the simulation and experiment, the supply frequency is swept from 1 GHz to 10 GHz. The results show that the return loss of the plasma antenna changes when the size of the fluorescent tube is varied, corresponding to different plasma properties. Different values of plasma properties, such as plasma frequency and collision frequency, give different results for return loss, gain and directivity. The gain values range from 2.14 dB to 2.36 dB. The return loss of the plasma antenna ranges from -22.187 dB to -32.903 dB; the higher the values of plasma frequency and collision frequency, the higher the return loss obtained. The values obtained are comparable to those of a conventional metal antenna.
Keywords: Plasma antenna, fluorescent tube, computer simulation technology, plasma parameters.
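For reference, the plasma parameters named above are commonly related to the antenna's electromagnetic response through the cold, collisional plasma (Drude) model; the expressions below are standard textbook relations, not taken from the paper.

```latex
f_p = \frac{1}{2\pi}\sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}},
\qquad
\varepsilon_r(\omega) = 1 - \frac{\omega_p^2}{\omega\left(\omega - j\nu\right)}
```

Here n_e is the electron density, ν the collision frequency and ω_p = 2πf_p the plasma (angular) frequency; the tube behaves as a conducting antenna element roughly for signal frequencies below f_p, which is why changing the plasma parameters shifts the measured return loss and gain.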
317 Developing a Web-Based Workflow Management System in Cloud Computing Platforms
Authors: Wang Shuen-Tai, Lin Yu-Ching, Chang Hsi-Ya
Abstract:
Cloud computing is the innovative and leading information technology model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort. In this paper, we aim at the development of a workflow management system for cloud computing platforms, based on our previous research on the dynamic allocation of cloud computing resources and its workflow process. We took advantage of HTML5 technology and developed a web-based workflow interface. In order to enable the combination of many tasks running on the cloud platform in sequence, we designed a mechanism and developed an execution engine for workflow management on clouds. We also established a prediction model, integrated with the job queuing system, to estimate the waiting time and cost of individual tasks on different computing nodes, thereby helping users achieve maximum performance at the lowest cost. This proposed effort has the potential to provide an efficient, resilient and elastic environment for cloud computing platforms. This development also helps boost user productivity by offering a flexible workflow interface that lets users design and control their tasks' flow from anywhere.
Keywords: Web-based, workflow, HTML5, cloud computing, queuing system.
316 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification
Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh
Abstract:
Learning from very big datasets is a significant problem for most present data mining and machine learning algorithms. MicroRNA (miRNA) data form one of the important large genomic and non-coding datasets representing genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing the miRNA dataset has been a challenging problem for researchers. The number of features relative to the number of samples is high, and the data suffer from being imbalanced. A feature selection method is used to select features with more ability to distinguish classes and to eliminate obscure features. Afterward, a Convolutional Neural Network (CNN) classifier for classification of cancer types is utilized, which employs a Genetic Algorithm to find optimized hyper-parameters of the CNN. In order to make the classification by the CNN faster, a Graphics Processing Unit (GPU) is recommended for carrying out the mathematical operations in parallel. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.
Keywords: Cancer classification, feature selection, deep learning, genetic algorithm.
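As an illustration of the hyper-parameter search described above (not the authors' code), the sketch below runs a small genetic algorithm over an assumed CNN hyper-parameter space; the evaluate function is a labeled stand-in for training a CNN on the miRNA data and returning its validation accuracy.

```python
import random

# Hypothetical search space for CNN hyper-parameters (names assumed).
SPACE = {
    "filters":       [16, 32, 64, 128],
    "kernel_size":   [3, 5, 7],
    "dropout":       [0.1, 0.3, 0.5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def evaluate(ind):
    """Stand-in fitness: in the paper this would be the validation accuracy
    of a CNN trained on the miRNA data with these hyper-parameters."""
    return -(ind["filters"] - 64) ** 2 / 1e4 - abs(ind["dropout"] - 0.3)

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene taken from one of the two parents.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in SPACE.items()}

random.seed(0)
pop = [random_individual() for _ in range(20)]
for gen in range(10):
    pop.sort(key=evaluate, reverse=True)
    parents = pop[:10]                        # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    pop = parents + children
print(max(pop, key=evaluate))                 # best hyper-parameter set found
```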
315 Resilient Machine Learning in the Nuclear Industry: Crack Detection as a Case Study
Authors: Anita Khadka, Gregory Epiphaniou, Carsten Maple
Abstract:
There is a dramatic surge in the adoption of Machine Learning (ML) techniques in many areas, including the nuclear industry (such as fault diagnosis and fuel management in nuclear power plants), autonomous systems (including self-driving vehicles), space systems (space debris recovery, for example), medical surgery, network intrusion detection, malware detection, to name a few. Artificial Intelligence (AI) has become a part of everyday modern human life. To date, the predominant focus has been developing underpinning ML algorithms that can improve accuracy, while factors such as resiliency and robustness of algorithms have been largely overlooked. If an adversarial attack is able to compromise the learning method or data, the consequences can be fatal, especially but not exclusively in safety-critical applications. In this paper, we present an in-depth analysis of five adversarial attacks and two defence methods on a crack detection ML model. Our analysis shows that it can be dangerous to adopt ML techniques without rigorous testing, since they may be vulnerable to adversarial attacks, especially in security-critical areas such as the nuclear industry. We observed that while the adopted defence methods can effectively defend against different attacks, none of them could protect against all five adversarial attacks entirely.
Keywords: Resilient Machine Learning, attacks, defences, nuclear industry, crack detection.
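The paper does not name the five attacks here; as one representative example of an adversarial attack on an image classifier such as a crack detector, the sketch below implements the Fast Gradient Sign Method (FGSM) in PyTorch with a placeholder model and batch.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, eps=0.03):
    """Fast Gradient Sign Method: add a small perturbation in the direction
    that increases the classification loss (e.g. hiding a crack)."""
    x = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), labels)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Tiny stand-in classifier and batch, only to make the sketch runnable.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
images = torch.rand(4, 3, 32, 32)        # placeholder "crack" image patches
labels = torch.tensor([0, 1, 0, 1])      # 1 = crack, 0 = no crack (assumed)
adv = fgsm_attack(model, images, labels)
print((adv - images).abs().max())        # perturbation is bounded by eps
```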
314 Development of the Maturity Sensor Prototype and Method of Its Placement in the Structure
Authors: Ye. B. Utepov, A. S. Tulebekova, A. B. Kazkeyev
Abstract:
Maturity sensors are used to determine concrete strength by a non-destructive method. The method of placement of the maturity sensors determines the number required for a given frame of a monolithic building. This paper proposes a cheap prototype of an embedded wireless sensor for monitoring concrete structures, as well as an alternative strategy for placing sensors based on the transitional boundaries of the temperature distribution of concrete curing, which were determined by building a heat map of the temperature distribution in which unknown values are calculated by inverse distance weighting. The developed prototype can simultaneously measure temperature and relative humidity over a smartphone-controlled time interval. It implements the maturity method to assess the in-situ strength of concrete, which is considered an alternative to the traditional shock impulse and compression testing methods used in Kazakhstan. The prototype was tested in laboratory and field conditions. The tests were aimed at studying the effect of internal and external temperature and relative humidity on the concrete's strength gain. Based on an experimentally poured concrete slab with randomly integrated maturity sensors, it was determined that the transition boundaries form elliptical shapes. The temperature distribution over the largest diameter of the ellipses was plotted, resulting in upright and inverted parabolas. As a result, the distance between the closest opposite crossing points of the parabolas is accepted as the maximum permissible step for setting the maturity sensors. The proposed placement strategy can be applied to sensors that measure other continuous phenomena, such as relative humidity. Prototype testing also revealed the inconvenience of Bluetooth, due to its weak signal and the inability to access multiple prototypes simultaneously. For this reason, further prototype upgrades are planned in future work.
Keywords: Heat map, placement strategy, temperature and relative humidity, wireless embedded sensor.
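The inverse distance weighting step mentioned above can be illustrated as follows; the sensor coordinates, temperatures and power parameter are invented for the example.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: estimate the temperature at unsampled
    points from the embedded sensor readings (coordinates in metres)."""
    xy_known = np.asarray(xy_known, float)
    xy_query = np.asarray(xy_query, float)
    values = np.asarray(values, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-9):              # query coincides with a sensor
            out[i] = values[d.argmin()]
            continue
        w = 1.0 / d ** power              # closer sensors weigh more
        out[i] = np.sum(w * values) / np.sum(w)
    return out

sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
temps = [31.5, 33.0, 30.8, 32.2]          # degrees C, illustrative values
print(idw(sensors, temps, [(0.5, 0.5), (0.2, 0.8)]))
```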
313 Kinetic Modeling of the Fischer-Tropsch Reactions and Modeling Steady State Heterogeneous Reactor
Authors: M. Ahmadi Marvast, M. Sohrabi, H. Ganji
Abstract:
The rate of production of the main products of the Fischer-Tropsch reactions over an Fe/HZSM5 bifunctional catalyst in a fixed bed reactor is investigated over a broad range of temperature, pressure, space velocity, H2/CO feed molar ratio and CO2, CH4 and water flow rates. Model discrimination and parameter estimation were performed according to the integral method of kinetic analysis. Due to the lack of mechanism development for Fischer-Tropsch synthesis on bifunctional catalysts, 26 different models were tested and the best model selected. Comprehensive one- and two-dimensional heterogeneous reactor models are developed to simulate the performance of fixed-bed Fischer-Tropsch reactors. To reduce computational time for optimization purposes, an Artificial Feed Forward Neural Network (AFFNN) has been used to describe intra-particle mass and heat transfer diffusion in the catalyst pellet. It is seen that the products' reaction rates have a direct relation with H2 partial pressure and an inverse relation with CO partial pressure. The results show that the hybrid model agrees well with the rigorous mechanistic model while being about 25-30 times faster.
Keywords: Fischer-Tropsch, heterogeneous modeling, kinetic study.
312 A Novel Approach for Protein Classification Using Fourier Transform
Authors: A. F. Ali, D. M. Shawky
Abstract:
Discovering new biological knowledge from high-throughput biological data is a major challenge for bioinformatics today. To address this challenge, we developed a new approach for protein classification. Proteins that are evolutionarily, and thereby functionally, related are said to belong to the same classification. Identifying protein classification is of fundamental importance for documenting the diversity of the known protein universe. It also provides a means to determine the functional roles of newly discovered protein sequences. Our goal is to predict the functional classification of novel protein sequences based on a set of features extracted from each protein sequence. The proposed technique uses datasets extracted from the Structural Classification of Proteins (SCOP) database. A set of spectral domain features based on the Fast Fourier Transform (FFT) is used. The proposed classifier uses a multilayer back-propagation (MLBP) neural network for protein classification. The maximum classification accuracy is about 91% when applying the classifier to the full four levels of the SCOP database. However, it reaches a maximum of 96% when limiting the classification to the family level. The classification results reveal that the spectral domain contains information that can be used for classification with high accuracy. In addition, the results emphasize that sequence similarity measures are of great importance, especially at the family level.
Keywords: Bioinformatics, Artificial Neural Networks, Protein Sequence Analysis, Feature Extraction.
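A minimal sketch of spectral feature extraction in this spirit is shown below: each residue is mapped to a number, the FFT magnitude spectrum is computed, and a fixed number of low-frequency bins is kept as the feature vector. The numeric mapping and feature count are placeholders, not the paper's choices.

```python
import numpy as np

# Simple amino-acid-to-number mapping; the values below are placeholders,
# not the encoding table used in the paper.
AA_VALUE = {aa: i / 20.0 for i, aa in enumerate("ACDEFGHIKLMNPQRSTVWY")}

def fft_features(sequence, n_features=16, n_fft=512):
    """Map a protein sequence to numbers, take the FFT magnitude spectrum,
    and keep the first n_features bins as a fixed-length feature vector."""
    signal = np.array([AA_VALUE.get(aa, 0.0) for aa in sequence.upper()])
    signal -= signal.mean()                     # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal, n=n_fft))
    return spectrum[1:n_features + 1]

print(fft_features("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```

Feature vectors of this form would then be fed to the back-propagation network for training and classification.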
311 MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database
Authors: U. Kalay, O. Kalıpsız
Abstract:
For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been done in the past decade. Beyond these major considerations, there are still other design issues that deserve attention due to their direct impact on I/O cost; in particular, an efficient buffer management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a network of roads. The proposed strategy, namely MONPAR, is based on the data type (i.e. spatiotemporal data) and the structure of the index. For the purpose of an experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We repeated the simulations with query sets of different distributions, such as uniform and skewed query distributions. Based on a comparison of our strategy with well-known page-replacement techniques, such as LRU-based and priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium-sized buffers under all tested query distributions.
Keywords: Buffer management, spatiotemporal databases.
310 Application of Machine Learning Methods to Online Test Error Detection in Semiconductor Test
Authors: Matthias Kirmse, Uwe Petersohn, Elief Paffrath
Abstract:
As test costs in today's semiconductor industry can make up to 50 percent of total production costs, efficient test error detection becomes more and more important. In this paper, we present a new machine learning approach to test error detection that should provide faster recognition of test system faults as well as improved test error recall. The key idea is to learn a classifier ensemble that detects typical test error patterns in wafer test results immediately after finishing these tests. Since test error detection has not yet been discussed in the machine learning community, we define central problem-relevant terms and provide an analysis of important domain properties. Finally, we present comparative studies reflecting the failure detection performance of three individual classifiers and three ensemble methods based upon them. As base classifiers we chose a decision tree learner, a support vector machine and a Bayesian network, while the compared ensemble methods were simple and weighted majority vote as well as stacking. For the evaluation, we used cross validation and a specially designed practical simulation. By implementing our approach in a semiconductor test department for the observation of two products, we proved its practical applicability.
Keywords: Ensemble methods, fault detection, machine learning, semiconductor test.
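The base learners and ensembles named above can be sketched with scikit-learn as follows; Gaussian naive Bayes stands in for the Bayesian network, the data are synthetic, and recall is used as the error-detection metric.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for wafer-test feature vectors (label 1 = test error).
X, y = make_classification(n_samples=600, n_features=20, weights=[0.9, 0.1],
                           random_state=0)

base = [("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("nb", GaussianNB())]   # naive Bayes stands in for the Bayesian network

for name, clf in [("majority vote", VotingClassifier(base, voting="hard")),
                  ("stacking", StackingClassifier(base))]:
    score = cross_val_score(clf, X, y, cv=5, scoring="recall").mean()
    print(f"{name}: recall = {score:.3f}")
```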
309 Effect on Physicochemical and Sensory Attributes of Bread Substituted with Different Levels of Matured Soursop (Anona muricata) Flour
Authors: Mardiana Ahamad Zabidi, Akmalluddin Md. Yunus
Abstract:
Soursop (Anona muricata) is one of the underutilized tropical fruits containing nutrients, particularly dietary fibre and antioxidant properties, that are beneficial to human health. The objective of this study is to investigate the feasibility of substituting matured soursop pulp flour (SPF) for high-protein wheat flour in bread. The bread formulation was substituted with different levels of SPF (0%, 5%, 10% and 15%), and the effects on physicochemical properties and sensory attributes were evaluated. A higher substitution level of SPF resulted in significantly higher (p<0.05) fibre, protein and ash content, while fat and carbohydrate content reduced significantly (p<0.05). FESEM showed that the bread crumb surfaces of the control and the 5% SPF bread appeared evenly distributed and coalesced by a thin gluten film. However, higher SPF substitution levels in the bread formulation exhibited a deleterious effect through the formation of a discontinuous gluten network. In the texture profile analysis, the 5% SPF bread gave the lowest hardness value. The sensory evaluation scores showed that the 5% SPF bread received good acceptability and is comparable with the control bread.
Keywords: Bread, Physicochemical properties, Scanning electron microscopy (SEM), Sensory attributes, Soursop pulp flour.
308 An Advanced Approach Based on Artificial Neural Networks to Identify Environmental Bacteria
Authors: Mauro Giacomini, Stefania Bertone, Federico Caneva Soumetz, Carmelina Ruggiero
Abstract:
Environmental micro-organisms include a large number of taxa and some species that are generally considered nonpathogenic, but can represent a risk in certain conditions, especially for elderly people and immunocompromised individuals. Chemotaxonomic identification techniques are powerful tools for environmental micro-organisms, and cellular fatty acid methyl ester (FAME) content provides a fingerprinting identification technique. A system based on an unsupervised artificial neural network (ANN) was set up using, as learning data, the fatty acid profiles of standard bacterial strains obtained by gas chromatography. We analysed 45 certified strains belonging to the Acinetobacter, Aeromonas, Alcaligenes, Aquaspirillum, Arthrobacter, Bacillus, Brevundimonas, Enterobacter, Flavobacterium, Micrococcus, Pseudomonas, Serratia, Shewanella and Vibrio genera. A set of 79 bacteria isolated from a drinking water line (AMGA, the major water supply system in Genoa) was used as a test case for identification, in comparison with the standard MIDI method. The resulting ANN output map was found to be a very powerful tool for identifying these fresh isolates.
Keywords: Cellular fatty acid methyl esters, environmental bacteria, gas-chromatography, unsupervised ANN.
307 The Reason of Principles of Construction Engineering and Management Being Necessary for Contracting Firms and Their Projects Managers
Authors: Mamoon Mousa Atout
Abstract:
The construction industry is in continuous growth, not only in the Middle East region but almost all over the world. For the last fifteen years, a big expansion and increase of different types of projects have been observed. Many infrastructure projects have been developed: high-rise buildings, big shopping malls, power sub-stations, roads, bridges, schools, universities, and many new cities with full and complete facilities. The growth and enlargement of these projects have been accomplished through many international and local contracting organizations. Senior management of these organizations depends on qualified and experienced teams who are aware of the implications of project management, construction management, engineering management and resource management from tendering until final completion of the project. This research aims to find out why principles of construction engineering and management are necessary for contracting firms and their project managers. Principles of construction management help contracting organizations accomplish and deliver projects without delay. This can be maintained by establishing detailed guidelines for updating the adopted construction management system through qualified and experienced project managers. The research focuses on the benefits of other essential skills of project planning, monitoring and control. Defining the roles and responsibilities of contractor project managers during tendering and execution is part of the investigated factors that will be analyzed. Other skills, such as optimizing and utilizing the available project resources to deliver the project within time, cost and quality, will also be investigated to find out how these factors affect the performance of contracting firms, project managers and projects. The conclusion of the research will help senior management teams and contractors' project managers understand the benefits and implications of a construction management system, its effect upon performance, their knowledge of contract values, and the optimal profit margin of the firm.
Keywords: Construction management, contracting firms, project managers, planning processes, roles and responsibilities.
306 Production and Application of Organic Waste Compost for Urban Agriculture in Emerging Cities
Authors: Alemayehu Agizew Woldeamanuel, Mekonnen Maschal Tarekegn, Raj Mohan Balakrishina
Abstract:
Composting is one of the conventional techniques adopted for organic waste management, but the practice is very limited in emerging cities despite the fact that most of the waste generated is organic. This paper aims to examine the viability of composting for organic waste management in the emerging city of Addis Ababa, Ethiopia, by addressing the composting practice, the quality of the compost and the application of compost in urban agriculture. The study collects data using compost laboratory testing and a survey of urban farm households, and uses descriptive analysis of the state of compost production and application, physicochemical analysis of the compost samples, and regression analysis of the urban farmers' willingness to pay for compost. The findings indicated that there is composting practice at a small scale, most producers use unsorted feedstock materials, aerobic composting is dominantly used, and the maturation period ranges from four to 10 weeks. The carbon content of the compost ranges from 30.8 to 277.1, depending on the type of feedstock applied, and this surpasses the ideal proportions for the C:N ratio. The total nitrogen, pH, organic matter and moisture content are relatively optimal. The levels of heavy metals measured for Mn, Cu, Pb, Cd and Cr6+ in the compost samples are also insignificant. In the urban agriculture sector, chemical fertilizer is the dominant type of soil input in crop production, but vegetable producers use a combination of fertilizer and other organic inputs, including compost. The willingness to pay for compost depends on income, household size, gender, type of soil inputs, monitoring of soil fertility, the main product of the farm, farming method and farm ownership. Finally, this study recommends collaboration among stakeholders along the waste value chain, awareness creation on the benefits of composting, and addressing the challenges faced by both compost producers and users.
Keywords: Composting, emerging city, organic waste management, urban agriculture.
305 Impact of Solar Energy Based Power Grid for Future Prospective of Pakistan
Authors: Muhammd Usman Sardar, Mazhar Hussain Baloch, Muhammad Shahbaz Ahmad, Zahir Javed Paracha
Abstract:
The shortfall of electrical energy in Pakistan is a challenge adversely affecting its industrial output and social growth. As elsewhere, Pakistan derives its electrical energy from a number of conventional sources. The exhaustion of petroleum and conventional resources and the rising costs, coupled with extremely adverse climatic effects, are taking their toll, especially on under-developed countries like Pakistan. As an alternative, renewable energy sources such as hydropower, solar, wind, even bio-energy, or a mix of some or all of them, could provide a credible alternative to the conventional energy resources that would not only be cleaner but sustainable as well. As a model, a solar energy based power grid for the near future is proposed to offset the energy shortfalls, in a mix with the existing sustainable natural energy resources. An assessment of solar energy potential for electricity generation is presented for fulfilling the energy demands with a higher level of reliability and sustainability. This model is based on the premise that the solar energy potential of Pakistan is not only reliable but also sustainable. This research estimates the present and approaching renewable energy resources, especially the impact of a solar energy based power grid, for mitigating the energy shortage in Pakistan.
Keywords: Powergrid network, solar photovoltaic (SPV) setups, solar power generation, solar energy technology (SET).
304 Simultaneous HPAM/SDS Injection in Heterogeneous/Layered Models
Authors: M. H. Sedaghat, A. Zamani, S. Morshedi, R. Janamiri, M. Safdari, I. Mahdavi, A. Hosseini, A. Hatampour
Abstract:
Although many experiments have been done in enhanced oil recovery, the number of experiments that consider the effects of local and global heterogeneity on the efficiency of polymer-surfactant flooding is low. In this research, we performed numerous water flooding and polymer-surfactant flooding experiments on a five-spot glass micromodel under different conditions, such as different positions of the layers. In these experiments, five different micromodels with three different pore structures were designed: three models with different layer orientations, one homogeneous model and one heterogeneous model. In order to incorporate the effect of heterogeneity of the porous media, the three types of pore structures were distributed randomly and in equal proportion throughout the heterogeneous micromodel network, according to a random normal distribution. The results show that the maximum EOR recovery factor occurs when the layers are orthogonal to the path of the main flow, and the minimum EOR recovery factor occurs when the model is heterogeneous. These experiments show that in polymer-surfactant flooding, the EOR recovery factor increases with the angle of the layers, and this recovery factor is strongly affected by local heterogeneity around the injection zone.
Keywords: Layered Reservoir, Micromodel, Local Heterogeneity, Polymer-Surfactant Flooding, Enhanced Oil Recovery.
303 The Design and Analysis of Learning Effects for a Game-based Learning System
Authors: Wernhuar Tarng, Weichian Tsai
Abstract:
The major purpose of this study is to use network and multimedia technologies to build a game-based learning system for junior high school students to learn "World Geography" through "role-playing" game approaches. This study first investigated the motivation and habits of junior high school students in using the Internet and online games, and then designed a game-based learning system according to situated and game-based learning theories. A teaching experiment was conducted to analyze the learning effectiveness of students on the game-based learning system and the major factors affecting their learning. A questionnaire survey was used to understand the students' attitudes towards game-based learning. The results showed that the game-based learning system can enhance students' learning, but the gender of students and their habits in using the Internet have no significant impact on learning. Game experience has a significant impact on students' learning, and the higher the experience value, the better the effectiveness of their learning. The results of the questionnaire survey also revealed that the system can increase students' motivation and interest in learning "World Geography".
Keywords: Game-based learning, situated learning, role playing, learning effectiveness, learning motivation.
302 Exploiting Two Intelligent Models to Predict Water Level: A Field Study of Urmia Lake, Iran
Authors: Shahab Kavehkar, Mohammad Ali Ghorbani, Valeriy Khokhlov, Afshin Ashrafzadeh, Sabereh Darbandi
Abstract:
Water level forecasting using records of past time series is of importance in water resources engineering and management. For example, water level affects groundwater tables in low-lying coastal areas, as well as the hydrological regimes of some coastal rivers. Thus, a reliable prediction of sea-level variations is required in coastal engineering and hydrologic studies. During the past two decades, approaches based on Genetic Programming (GP) and Artificial Neural Networks (ANN) have been developed. In the present study, GP is used to forecast daily water level variations for a set of time intervals using observed water levels. The measurements from a single tide gauge at Urmia Lake, Northwest Iran, were used to train and validate the GP approach for the period from January 1997 to July 2008. Two statistics, the root mean square error and the correlation coefficient, are used to verify the model by comparison with corresponding outputs from an Artificial Neural Network model. The results show that both artificial intelligence methodologies are satisfactory and can be considered as alternatives to conventional harmonic analysis.
Keywords: Water-Level variation, forecasting, artificial neural networks, genetic programming, comparative analysis.
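The two verification statistics mentioned above are computed as follows; the water-level series here are illustrative numbers, not the Urmia Lake gauge record.

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error between observed and predicted levels."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.mean((obs - pred) ** 2))

def correlation(obs, pred):
    """Pearson correlation coefficient between observed and predicted levels."""
    return np.corrcoef(obs, pred)[0, 1]

# Illustrative daily lake levels (m), invented for the example.
observed = [1274.10, 1274.12, 1274.09, 1274.15, 1274.20]
gp_pred  = [1274.11, 1274.10, 1274.10, 1274.16, 1274.18]
ann_pred = [1274.08, 1274.13, 1274.07, 1274.17, 1274.22]

for name, pred in [("GP", gp_pred), ("ANN", ann_pred)]:
    print(name, "RMSE =", round(rmse(observed, pred), 4),
          "r =", round(correlation(observed, pred), 4))
```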
301 Providing a Secure, Reliable and Decentralized Document Management Solution Using Blockchain by a Virtual Identity Card
Authors: Meet Shah, Ankita Aditya, Dhruv Bindra, V. S. Omkar, Aashruti Seervi
Abstract:
In today's world, we need documents everywhere for a smooth workflow in the identification process or for other security purposes. The current systems and techniques used for identification need one thing, namely 'proof of existence', which involves valid documents, for example, educational, financial, etc. The main issue with the current identity access management system and digital identification process is that the system is centralized within its network, which makes it inefficient. This paper presents a system that resolves these issues. It is based on 'blockchain' technology, which is a decentralized system that allows transactions in a decentralized and immutable manner. The primary notion of the model is to 'have everything with nothing'. It involves inter-linking a person's required documents with a single identity card so that the person can go anywhere without carrying the required documents. The person just needs to be physically present at a place where documents are necessary, and using a fingerprint impression and an iris scan, the rest of the verification will proceed. Furthermore, some technical overheads and advancements are listed. This paper also aims to lay out a far-vision scenario of blockchain and its impact on future trends.
Keywords: Blockchain, decentralized system, fingerprint impression, identity management, iris scan.
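As a toy illustration of anchoring document fingerprints to a virtual identity (a single-machine sketch, not the decentralized system described above), the code below hash-links document records into a chain and verifies that the links are intact.

```python
import hashlib
import json
import time

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def new_block(prev_hash, identity_id, doc_name, doc_bytes):
    """Anchor a document's fingerprint (not the document itself) to a
    virtual identity in an append-only chain. Field names are assumptions."""
    block = {
        "timestamp": time.time(),
        "identity_id": identity_id,
        "doc_name": doc_name,
        "doc_hash": hashlib.sha256(doc_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    block["block_hash"] = sha256(json.dumps(block, sort_keys=True))
    return block

chain = [new_block("0" * 64, "ID-001", "degree_certificate", b"pdf bytes 1")]
chain.append(new_block(chain[-1]["block_hash"], "ID-001",
                       "bank_statement", b"pdf bytes 2"))

# Verification: tampering with any document or earlier block breaks the links.
ok = all(chain[i]["prev_hash"] == chain[i - 1]["block_hash"]
         for i in range(1, len(chain)))
print("chain valid:", ok)
```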
300 Highly Accurate Target Motion Compensation Using Entropy Function Minimization
Authors: Amin Aghatabar Roodbary, Mohammad Hassan Bastani
Abstract:
One of the defects of stepped frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, Signal to Noise Ratio (SNR) reduction and range profile spreading, because the power spectrum of each range cell interferes with adjacent range cells; this induces distortion in the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Thus, compensation of Target Motion Parameter (TMP) effects should be employed. In this paper, a method for estimating the TMPs (velocity and acceleration), and consequently eliminating or suppressing their unwanted effects on the HRRP, based on entropy minimization is proposed. This method is carried out in two major steps: in the first step, a discrete search is performed over the whole acceleration-velocity lattice, in a specific interval, seeking a coarse minimum point of the entropy function. Then in the second step, a 1-D search over velocity is done in the locus of the minimum, for several constant-acceleration lines, in order to enhance the accuracy of the minimum point found in the first step. The provided simulation results demonstrate the effectiveness of the proposed method.
Keywords: ATR, HRRP, motion compensation, SFW, TMP.
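A sketch of the two-step entropy-minimization search is given below. The stepped-frequency phase-compensation model, the search ranges and the step sizes are assumptions made for the illustration; the entropy definition is the usual normalized-power form.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def entropy(profile):
    """Shannon entropy of a range profile; a well-focused HRRP has low entropy."""
    p = np.abs(profile) ** 2
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def compensate(samples, freqs, t, v, a):
    """Remove the phase induced by radial velocity v and acceleration a from
    stepped-frequency samples, then form the HRRP by inverse FFT.
    (Simplified motion model, assumed for the sketch.)"""
    phase = 4 * np.pi * freqs * (v * t + 0.5 * a * t ** 2) / C
    return np.fft.ifft(samples * np.exp(1j * phase))

def estimate_tmp(samples, freqs, t, v_grid, a_grid):
    # Step 1: coarse search over the whole velocity-acceleration lattice.
    coarse = min(((v, a) for v in v_grid for a in a_grid),
                 key=lambda va: entropy(compensate(samples, freqs, t, *va)))
    # Step 2: finer 1-D search in velocity around the coarse minimum
    # (the +/-1 m/s refinement window is an assumption).
    v_fine = np.linspace(coarse[0] - 1.0, coarse[0] + 1.0, 201)
    v_best = min(v_fine, key=lambda v: entropy(
        compensate(samples, freqs, t, v, coarse[1])))
    return v_best, coarse[1]

# Usage: pass the de-chirped complex samples, the stepped carrier frequencies,
# the per-step time instants, and the coarse velocity/acceleration grids.
```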
299 Optimizing Organizational Performance: The Critical Role of Headcount Budgeting in Strategic Alignment and Financial Stability
Authors: Shobhit Mittal
Abstract:
Headcount budgeting stands as a pivotal element in organizational financial management, extending beyond traditional budgeting to encompass strategic resource allocation for workforce-related expenses. This process is integral to maintaining financial stability and fostering a productive workforce, requiring a comprehensive analysis of factors such as market trends, business growth projections, and evolving workforce skill requirements. It demands a collaborative approach, primarily involving Human Resources (HR) and finance departments, to align workforce planning with an organization's financial capabilities and strategic objectives. The dynamic nature of headcount budgeting necessitates continuous monitoring and adjustment in response to economic fluctuations, business strategy shifts, technological advancements, and market dynamics. Its significance in talent management is also highlighted, aligning financial planning with talent acquisition and retention strategies to ensure a competitive edge in the market. The consequences of incorrect headcount budgeting are explored, showing how it can lead to financial strain, operational inefficiencies, and hindered strategic objectives. Examining case studies like IBM's strategic workforce rebalancing and Microsoft's shift for long-term success, the importance of aligning headcount budgeting with organizational goals is underscored. These examples illustrate that effective headcount budgeting transcends its role as a financial tool, emerging as a strategic element crucial for an organization's success. This necessitates continuous refinement and adaptation to align with evolving business goals and market conditions, highlighting its role as a key driver in organizational success and sustainability.
Keywords: Strategic planning, fiscal budget, headcount planning, resource allocation, financial management, decision-making, operational efficiency, risk management, headcount budget.
298 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm
Authors: A. El Harraj, N. Raissouni
Abstract:
The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction. However, these algorithms are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing illumination-change invariance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence; then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: Video surveillance, background subtraction, Contrast Limited Adaptive Histogram Equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes.
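A compact version of the pipeline (CLAHE on the luminance channel, a per-pixel Gaussian mixture background model, then morphological opening) can be assembled with OpenCV as below; OpenCV's MOG2 subtractor stands in for the authors' own GMM implementation, and the file name and parameter values are placeholders.

```python
import cv2

cap = cv2.VideoCapture("scene.mp4")                 # any surveillance clip
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
bg.setNMixtures(5)                                  # K = 5 Gaussians per pixel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Illumination-invariance step: CLAHE on the luminance (L) channel.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    frame_eq = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
    # Background/foreground modelling with a Gaussian mixture per pixel.
    mask = bg.apply(frame_eq)
    # Morphological erosion then dilation (opening) to remove noise.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:                        # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```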
297 Mapping SOA and Outsourcing on NEBIC: A Dynamic Capabilities Perspective Approach
Authors: Benazeer Md. Shahzada, Verelst Jan, Van Grembergen Wim, Mannaert Herwig
Abstract:
This article is an extension and a practical application of Wheeler's NEBIC theory (Net Enabled Business Innovation Cycle). NEBIC theory is a new approach in IS research and can be used for dynamic environments related to new technology. Firms can follow market changes rapidly with the support of IT resources. Flexible firms adapt their market strategies and respond more quickly to customers' changing behaviors. When every leading firm in an industry has access to the same IT resources, the way these IT resources are managed will determine the competitive advantages or disadvantages of the firm. From the Dynamic Capabilities Perspective and from the newly introduced NEBIC theory by Wheeler, we know that IT resources alone cannot deliver customer value, but a good configuration of those resources can guarantee customer value by choosing the right emerging technology and grasping economic opportunities through business innovation and growth. We found evidence in the literature that SOA (Service Oriented Architecture) is a promising emerging technology which can deliver the desired economic opportunity through modularity, flexibility and loose coupling. SOA can also help firms to connect in networks, which can open a new window of opportunity to collaborate in innovation and the right kind of outsourcing.
Keywords: Absorptive capacity, dynamic capability, net-enabled business innovation cycle, service oriented architecture.
296 Geochemistry of Natural Radionuclides Associated with Acid Mine Drainage (AMD) in a Coal Mining Area in Southern Brazil
Authors: Juliana A. Galhardi, Daniel M. Bonotto
Abstract:
Coal is an important non-renewable energy source and can be associated with radioactive elements. In Figueira city, Paraná state, Brazil, high uranium activity was recorded near the coal mine that supplies a local thermoelectric power plant. In this context, the radon activity (Rn-222, produced by Ra-226 decay in the U-238 natural series) was evaluated in groundwater, river water and effluents produced by the acid mine drainage in the coal reject dumps. The samples were collected in August 2013 and in February 2014 and analyzed at LABIDRO (Laboratory of Isotope and Hydrochemistry), UNESP, Rio Claro city, Brazil, using an alpha spectrometer (AlphaGuard) adjusted to evaluate the mean radon activity concentration in five cycles of 10 minutes. No radon activity concentration was above 100 Bq.L-1, a critical value previously established by the World Health Organization. The average radon activity concentration in groundwater was higher than in surface water and in the effluent samples, possibly due to the accumulation of uranium and radium in the aquifer layers, which favors radon trapping. The lower value in the river waters can indicate dilution, and the intermediate value in the effluents may indicate radon absorption in the coal particles of the reject dumps. The results also indicate that the radon activities in the effluents increase with sample acidification, possibly due to the higher radium leaching and the subsequent radon transport to the drainage flow. The water samples of the Laranjinha River and the Ribeirão das Pedras stream, which respectively supply Figueira city and receive the mining effluent, exhibited higher pH values upstream of the mine, reflecting the acid mine drainage discharge. The transport of radionuclides indicates the importance of monitoring their activity concentrations in natural waters, due to the risks that radioactivity can represent to human health.
Keywords: Radon, radium, acid mine drainage, coal.
295 Load Balancing in Heterogeneous P2P Systems using Mobile Agents
Authors: Neeraj Nehra, R. B. Patel, V. K. Bhat
Abstract:
Use of the Internet and the World-Wide-Web (WWW) has become widespread in recent years, and mobile agent technology has proliferated at an equally rapid rate. In this scenario, load balancing becomes important for P2P systems. Besides, P2P systems can be highly heterogeneous, i.e., they may consist of peers that range from old desktops to powerful servers connected to the internet through high-bandwidth lines. Various load balancing policies have come into the picture. A primitive one is the Message Passing Interface (MPI): its wide availability and portability make it an attractive choice; however, the communication requirements are sometimes inefficient when implementing the primitives provided by MPI. In this scenario, we use the concept of mobile agents, because the mobile agent (MA) based approach has the merits of high flexibility, efficiency, low network traffic and low communication latency, as well as being highly asynchronous. In this study, we present a decentralized load balancing scheme using mobile agent technology in which, when a node is overloaded, tasks migrate to less utilized nodes so as to share the workload. The decision of which nodes receive a migrating task is made in real time by defining certain load balancing policies. These policies are executed on PMADE (A Platform for Mobile Agent Distribution and Execution) in a decentralized manner using JuxtaNet, and various load balancing metrics are discussed.
Keywords: Mobile agents, agent host, agent submitter, PMADE.
294 In Cognitive Radio the Analysis of Bit-Error-Rate (BER) by using PSO Algorithm
Authors: Shrikrishan Yadav, Akhilesh Saini, Krishna Chandra Roy
Abstract:
The electromagnetic spectrum is a natural resource, and hence well-organized usage of this limited natural resource is a necessity for better communication. The present static frequency allocation schemes cannot accommodate the demands of the rapidly increasing number of higher data rate services. Therefore, dynamic usage of the spectrum must be distinguished from static usage to increase the availability of the frequency spectrum. Cognitive radio is not a single piece of apparatus but a technology that can incorporate components spread across a network. It offers great promise for improving system efficiency, spectrum utilization, and application effectiveness, with reduced interference and reduced complexity of usage for users. A cognitive radio is aware of its environment, internal state, and location, and autonomously adjusts its operations to achieve designed objectives. It first senses its spectral environment over a wide frequency band, and then adapts its parameters to maximize spectrum efficiency with high performance. This paper focuses on the analysis of the Bit-Error-Rate in cognitive radio using the Particle Swarm Optimization (PSO) algorithm. The BER is analyzed and interpreted, both theoretically and practically, in terms of advantages and drawbacks and of how it affects the efficiency and performance of the communication system.
Keywords: BER, cognitive radio, environmental parameters, PSO, radio spectrum, transmission parameters.
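A minimal particle swarm optimizer is sketched below; the objective combines the theoretical BPSK bit-error rate with a small power penalty purely for illustration, since the paper's exact cost function and transmission parameters are not reproduced here.

```python
import numpy as np
from scipy.special import erfc

def ber_bpsk(snr_db):
    """Theoretical BPSK bit-error rate, used here as an illustrative objective."""
    snr = 10 ** (snr_db / 10)
    return 0.5 * erfc(np.sqrt(snr))

def objective(x):
    """Toy cost: BER for the chosen transmit SNR plus a power penalty, so the
    swarm trades error rate against power usage (weights are assumptions)."""
    snr_db = x[0]
    return ber_bpsk(snr_db) + 1e-3 * snr_db

def pso(obj, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))      # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([obj(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

print(pso(objective, bounds=[(0.0, 15.0)]))   # best SNR setting and its cost
```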
293 Gaits Stability Analysis for a Pneumatic Quadruped Robot Using Reinforcement Learning
Authors: Soofiyan Atar, Adil Shaikh, Sahil Rajpurkar, Pragnesh Bhalala, Aniket Desai, Irfan Siddavatam
Abstract:
Deep reinforcement learning (deep RL) algorithms automate the design of complex controllers by mapping sensory inputs to low-level actions, eliminating the need to model complex robot dynamics with minimal engineering. However, deep RL involves high risk when implemented directly in real-world scenarios, and it is also highly sensitive to hyperparameters. Tuning hyperparameters on a pneumatic quadruped robot becomes very expensive through trial-and-error learning. This paper presents automated learning control for a pneumatic quadruped robot using sample-efficient deep Q-learning, enabling minimal tuning and very few trials to train the neural network. Long training hours may degrade the pneumatic cylinders due to jerky actions originating from stochastic weights. We applied this method to the pneumatic quadruped robot, which resulted in a hopping gait. In our process, we eliminated the use of a simulator and acquired a stable gait. This approach evolves so that the resultant gait becomes more robust to stochastic changes in the environment. We further show that our algorithm performed very well compared to a programmed gait based on robot dynamics.
Keywords: model-based reinforcement learning, gait stability, supervised learning, pneumatic quadruped
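One deep Q-learning update step is sketched below in PyTorch; the network sizes, the replayed batch and the reward signal are placeholders standing in for the robot's logged transitions, not the authors' training code.

```python
import torch
import torch.nn as nn
import torch.optim as optim

class QNet(nn.Module):
    """Small Q-network: joint/pressure observations in, action values out."""
    def __init__(self, n_obs, n_act):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(),
                                 nn.Linear(64, n_act))
    def forward(self, x):
        return self.net(x)

n_obs, n_act, gamma = 8, 4, 0.99            # sizes are placeholders
policy, target = QNet(n_obs, n_act), QNet(n_obs, n_act)
target.load_state_dict(policy.state_dict())
opt = optim.Adam(policy.parameters(), lr=1e-3)

# One replayed mini-batch; random tensors stand in for logged transitions
# (state, action, reward, next state) from the pneumatic robot.
s = torch.randn(32, n_obs)
a = torch.randint(0, n_act, (32, 1))
r = torch.randn(32, 1)
s2 = torch.randn(32, n_obs)
done = torch.zeros(32, 1)

q_sa = policy(s).gather(1, a)                             # Q(s, a)
with torch.no_grad():
    q_next = target(s2).max(dim=1, keepdim=True).values
    y = r + gamma * (1 - done) * q_next                   # TD target
loss = nn.functional.mse_loss(q_sa, y)
opt.zero_grad()
loss.backward()
opt.step()
print("TD loss:", loss.item())
```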
292 The Phatic Function and the Socializing Element of Personal Blogs
Authors: Emelia Noronha, Milind Malshe
Abstract:
The phatic function of communication is a vital element of any conversation. This research paper looks into this function with respect to personal blogs maintained by Indian bloggers; it is a study of the phenomenon of phatic communication maintained by bloggers through their blogs. Based on a linguistic analysis of the posts of twenty-eight Indian bloggers writing in English, studied over a period of three years, the study indicates that although the blogging phenomenon is not conversational in the same manner as face-to-face communication, it does make ample provision for feedback that is conversational in nature. Ordinary day-to-day offline conversations use conventionalized phatic utterances; those on social media are in a perpetual mode of innovation and experimentation in order to sustain contact with readers. These innovative methods and means are the focus of this study. Though the personal blogger aims to chronicle his/her personal life through the blog, the socializing function is crucial to these bloggers. In comparison to western personal blogs, which focus on the presentation of the 'bounded individual self', we find Indian personal bloggers engage in the presentation of their 'social selves'. These bloggers yearn to reach out to readers on the internet, and the phatic function serves to initiate, sustain and renew social ties on the blogosphere, thereby consolidating the social network of readers and bloggers.
Keywords: Personal blogs, phatic, social selves, blog readers.
291 Suitable Partner Node Selection and Resource Allocation in Cooperative Wireless Communication Using the Trade-Off Game
Authors: Oluseye A. Adeleke, Mohd. F. M. Salleh
Abstract:
The performance of any cooperative communication system depends largely on the selection of a proper partner. Another important factor to consider is the efficient allocation of resources, such as power, by the source node to help it forward information to the destination. In this paper, we look at the concepts of partner selection and resource (power) allocation for a distributed communication network. A type of non-cooperative game referred to as the Trade-Off game is employed so as to jointly consider the utilities of the source and relay nodes. In this case, the source is the node that requires help with forwarding its information, while the partner is the node that is willing to help forward the source node's information, but at a price. The approach enables the source node to maximize its utility by selecting a partner node based on (i) the proximity of the partner node to the source and destination nodes, and (ii) the price the partner node will charge for the help rendered. Our proposed scheme helps the source locate and select relay nodes at 'better' locations and purchase power optimally from them. It also helps the contending relay nodes maximize their own utilities by asking proper prices. Our game scheme is seen to converge to a unique equilibrium.
Keywords: Cooperative communication, game theory, node, power allocation, trade-off, utility.
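As a toy illustration of the joint partner-selection and power-purchase decision (the utility function, gain model and prices below are assumptions, not the paper's formulation), the source enumerates candidate relays and power levels and picks the pair with the highest net utility.

```python
import math

# Candidate relays: position and the price each asks per unit of power.
# All numbers are illustrative.
source, destination = (0.0, 0.0), (10.0, 0.0)
relays = {"R1": {"pos": (4.0, 1.0), "price": 0.8},
          "R2": {"pos": (6.0, 3.0), "price": 0.5},
          "R3": {"pos": (9.0, 0.5), "price": 0.4}}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def source_utility(relay, power):
    """Assumed utility: a relayed-rate gain that favours relays close to the
    source-destination path, minus the payment made to the relay."""
    d_sr = dist(source, relay["pos"])
    d_rd = dist(destination, relay["pos"])
    gain = math.log2(1.0 + power / (d_sr ** 2 * d_rd ** 2))
    return gain - relay["price"] * power

# Enumerate relays and power levels (0.1 to 10.0) and pick the best pair.
best = max(((name, p / 10.0)
            for name in relays for p in range(1, 101)),
           key=lambda choice: source_utility(relays[choice[0]], choice[1]))
print("selected relay and purchased power:", best)
```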