Search results for: Bayesian approach Kalman filtering approach
4176 Augmented Reality Sandbox and Constructivist Approach for Geoscience Teaching and Learning
Authors: Muhammad Nawaz, Sandeep N. Kundu, Farha Sattar
Abstract:
The augmented reality sandbox adds new dimensions to the education and learning process. It can be a core component of geoscience teaching and learning for understanding geographic contexts and landform processes. The augmented reality sandbox is a useful tool that not only creates an interactive learning environment through spatial visualization but also provides an active learning experience to students and enhances the cognitive process of learning. It can be used as an interactive learning tool to teach geomorphic and landform processes. This article explains the augmented reality sandbox and the constructivist approach for geoscience teaching and learning, and explores ways to teach geographic processes using a three-dimensional digital environment for deep, interactive learning of geoscience concepts.
Keywords: Augmented Reality Sandbox, constructivism, deep learning, geoscience.
4175 Formosa3: A Cloud-Enabled HPC Cluster in NCHC
Authors: Chin-Hung Li, Te-Ming Chen, Ying-Chuan Chen, Shuen-Tai Wang
Abstract:
This paper proposes a new approach to offering a private cloud service in HPC clusters. In particular, our approach relies on automatically scheduling users' customized environment requests as normal jobs in the batch system. After the virtualization request jobs finish, the guest operating systems are dismissed so that the compute nodes are released again for computing. We present initial work on the innovative integration of an HPC batch system and virtualization tools, aiming at a coexistence that satisfies the minimal-interference requirement of a traditional HPC cluster. Given the design of the initial infrastructure, the proposed effort has the potential to positively impact the synergy model. The experimental results show that the goal of provisioning customized cluster environments can indeed be fulfilled by using virtual machines, and that efficiency can be improved with proper setup and arrangements.
Keywords: Cloud Computing, HPC Cluster, Private Cloud, Virtualization
4174 MIT Automatic ECG Beat Tachycardia Detection Using Artificial Neural Network
Authors: R. Amandi, A. Shahbazi, A. Mohebi, M. Bazargan, Y. Jaberi, P. Emadi, A. Valizade
Abstract:
The application of Neural Networks for disease diagnosis has made great progress and is widely used by physicians. An electrocardiogram carries vital information about heart activity, and physicians use this signal for cardiac disease diagnosis, which was the main motivation for our study. In our work, the extracted tachycardia features are used for training and testing a Neural Network. In this study, we use a Fuzzy Probabilistic Neural Network as an automatic technique for ECG signal analysis. As every real signal recorded by the equipment can contain various artifacts, several preprocessing steps are needed before feeding it to our system. The wavelet transform is used for extracting the morphological parameters of the ECG signal. The outcome for the variety of arrhythmias shows that the presented approach is superior to previously published algorithms, with an average accuracy of about 95% for more than 7 tachyarrhythmias.
Keywords: Fuzzy Logic, Probabilistic Neural Network, Tachycardia, Wavelet Transform.
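The abstract above describes wavelet-based feature extraction feeding a fuzzy probabilistic neural network. As an illustration of the feature-extraction step only, here is a minimal Python sketch using PyWavelets; the 'db4' wavelet, the decomposition level, and the per-subband statistics are assumptions for demonstration, not the authors' configuration.

import numpy as np
import pywt  # PyWavelets

def wavelet_features(beat, wavelet="db4", level=4):
    """Decompose one ECG beat and summarize each subband."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)  # [cA4, cD4, ..., cD1]
    feats = []
    for band in coeffs:
        feats += [band.mean(), band.std(), np.abs(band).max()]
    return np.array(feats)

# Example: a synthetic 360-sample beat stands in for a real recording.
rng = np.random.default_rng(0)
beat = np.sin(np.linspace(0, 2 * np.pi, 360)) + 0.05 * rng.standard_normal(360)
print(wavelet_features(beat).shape)  # 5 subbands x 3 statistics = (15,)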
4173 Liver Lesion Extraction with Fuzzy Thresholding in Contrast Enhanced Ultrasound Images
Authors: Abder-Rahman Ali, Adélaïde Albouy-Kissi, Manuel Grand-Brochier, Viviane Ladan-Marcus, Christine Hoeffl, Claude Marcus, Antoine Vacavant, Jean-Yves Boire
Abstract:
In this paper, we present a new segmentation approach for focal liver lesions in contrast enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, considers type-II fuzzy sets to handle uncertainty due to the image modality (presence of speckle noise, low contrast, etc.), and to calculate the optimum inter-cluster threshold. Fine boundaries are detected by a local recursive merging of ambiguous pixels. The method has been tested on a representative database. Compared to both Otsu and type-I Fuzzy C-Means techniques, the proposed method significantly reduces the segmentation errors.
Keywords: Defuzzification, fuzzy clustering, image segmentation, type-II fuzzy sets.
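The core of the method, a two-cluster Fuzzy C-Means on gray levels yielding an inter-cluster threshold, can be sketched in plain type-I form (the paper's type-II extension and the local recursive merging of ambiguous pixels are omitted; all settings are illustrative):

import numpy as np

def fcm_threshold(pixels, m=2.0, iters=100, tol=1e-6):
    """Two-cluster fuzzy C-means on gray levels; returns an inter-cluster threshold."""
    x = pixels.astype(float).ravel()
    centers = np.array([x.min(), x.max()])
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12     # distances to centers
        u = 1.0 / (d ** (2 / (m - 1)))                        # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)
        new = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)  # weighted center update
        if np.abs(new - centers).max() < tol:
            centers = new
            break
        centers = new
    return centers.mean()  # threshold taken midway between the two cluster centers

img = np.concatenate([np.random.normal(60, 10, 5000), np.random.normal(150, 15, 5000)])
print(round(fcm_threshold(img), 1))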
4172 Learning Undergraduate Mathematics in a Discovery-Enriched Approach
Authors: Kam-moon Liu, Kwok-chi Chim, Kwok-wai Chung, Daniel Wing-cheong Ho
Abstract:
Students often adopt routine practicing as a learning strategy for mathematics, because in high school they are often bound and trained to solve conventional types of mathematics questions. This becomes problematic if students further consolidate this practice at university. The Department of Mathematics therefore emphasized and integrated the discovery-enriched approach in the undergraduate curriculum. This paper presents the details of implementing the discovery-enriched curriculum by providing an adequate platform for project learning, expertise for guidance, and internship opportunities for students majoring in Mathematics. The Department also provided project-learning opportunities in mathematics courses targeted at students majoring in other science or engineering disciplines. The outcome is promising: the research ability and problem-solving skills of students are enhanced.
Keywords: Discovery-enriched curriculum, higher education, mathematics education, project learning.
4171 Case Study Approach Using Scenario Analysis to Analyze Unabsorbed Head Office Overheads
Authors: K. C. Iyer, T. Gupta, Y. M. Bindal
Abstract:
Head office overhead (HOOH) is an indirect cost and is recovered through individual project billings by the contractor. Delay in a project impacts the absorption of the HOOH cost allocated to that particular project and thus diminishes the expected profit of the contractor. This unabsorbed HOOH cost is later claimed by contractors as damages. The subjective nature of the available formulae for computing unabsorbed HOOH is a difficulty that contractors and owners face, and a frequent source of dispute. The paper attempts to bring together the rationale of the various HOOH formulae by gathering a contractor's HOOH cost data on all of its projects, using a case study approach, and comparing the variations in HOOH values using scenario analysis. The case study approach uses project data collected from four construction projects of a contractor in India to calculate unabsorbed HOOH costs from the various available formulae. Scenario analysis provides further variations in HOOH values after considering two independent situations, namely scope changes and new projects during the delay period. Interestingly, one of the findings of this study reveals that, in spite of HOOH getting absorbed by additional works available during the period of delay, a few formulae depict an increase in the value of unabsorbed HOOH, neglecting any absorption by the increase in scope. This indicates that these formulae are inappropriate for use in case of a change to the scope of work. The results of this study can help both parties decide on an appropriate formula more objectively, considering the events causing the delay on a project and the contractor's position in respect of obtaining new projects.
Keywords: Absorbed and unabsorbed overheads, head office overheads, scenario analysis, scope variation
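For context, two of the classical formulae such studies compare, Hudson and Eichleay, can be computed directly. The sketch below uses their commonly cited forms with invented figures; the exact variants examined in the paper may differ.

def hudson(ho_profit_pct, contract_sum, contract_period_days, delay_days):
    """Hudson formula: head-office overhead and profit recovery for a delay."""
    return (ho_profit_pct / 100.0) * (contract_sum / contract_period_days) * delay_days

def eichleay(contract_billings, total_billings, total_hooh, performance_days, delay_days):
    """Eichleay formula: overhead allocable to the contract, spread over actual days."""
    allocable = (contract_billings / total_billings) * total_hooh
    daily_rate = allocable / performance_days
    return daily_rate * delay_days

# Illustrative numbers only (currency units arbitrary).
print(hudson(10, 5_000_000, 400, 60))            # 75000.0
print(eichleay(5_000_000, 20_000_000, 1_200_000, 460, 60))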
4170 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector
Authors: Mariam Vardiashvili
Abstract:
The economic significance of the asset impairment process is considerable. Impairment reflects the reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities bring economic benefits or are used for the delivery of free-of-charge services, and are consequently classified as cash-generating and non-cash-generating assets. IPSAS 21 - Impairment of non-cash-generating assets, and IPSAS 26 - Impairment of cash-generating assets, have been designed with this specificity in mind. When measuring impairment of assets, it is important to select the relevant methods. For measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. Value in use of cash-generating assets (as per IPSAS 26) is measured by the discounted value of the cash flows to be received in the future. The article classifies public sector assets as non-cash-generating and cash-generating assets, and also deals with the factors which should be considered when evaluating impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets, and on how to select among them. The traditional and the expected cash flow approaches for calculating the discounted value are reviewed. The article also discusses the issues of recognition of impairment loss and its reflection in financial reporting. The article concludes that, regardless of the functional purpose of the impaired asset and whichever method is used for measuring it, presentation of realistic information regarding the value of the assets should be ensured in the financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis, and synthesis were used. The research was carried out with a systemic approach, drawing on international accounting standards and on theoretical research and publications by Georgian and foreign scientists.
Keywords: Non-cash-generating assets, cash-generating assets, recoverable value, recoverable service amount, value in use.
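The two discounting approaches mentioned, traditional and expected cash flow, differ only in how the per-period cash flow is formed. A minimal sketch with invented figures (rates and amounts are illustrative, not drawn from the standards):

def value_in_use_traditional(cash_flows, rate):
    """Traditional approach: one 'best estimate' cash-flow series, one risk-adjusted rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def value_in_use_expected(scenarios, rate):
    """Expected cash flow approach: probability-weight several scenarios per period."""
    expected = [sum(p * cf for p, cf in period) for period in scenarios]
    return value_in_use_traditional(expected, rate)

flows = [100_000, 100_000, 90_000]
scen = [[(0.3, 80_000), (0.7, 110_000)]] * 3   # same scenario set each year, for brevity
print(round(value_in_use_traditional(flows, 0.08)))
print(round(value_in_use_expected(scen, 0.05)))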
4169 Dynamic Threshold Adjustment Approach for Neural Networks
Authors: Hamza A. Ali, Waleed A. J. Rasheed
Abstract:
The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that have any influence on the network parameters. Attempts were made in this work to design a neural network that includes an additional mechanism adjusting the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second handles output generation dynamically, with the capability of tuning for any newly applied input. This tuning comes in the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold-setting mechanism, while the second implements an auxiliary net with a traditional architecture that dynamically adjusts the threshold value of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs are quite satisfactory. The supportive-layer approach achieved over 90% recognition rate, while the multiple-network technique shows a more effective and acceptable level of recognition, although at the price of network complexity and computation time. Recognition generalization may also be improved by exploiting all the innate structures in conjunction with intelligence abilities, at the cost of further advanced learning phases.
Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.
4168 High Quality Speech Coding using Combined Parametric and Perceptual Modules
Authors: M. Kulesza, G. Szwoch, A. Czyżewski
Abstract:
A novel approach to speech coding using a hybrid architecture is presented. The advantages of parametric and perceptual coding methods are combined in order to create a speech coding algorithm assuring better signal quality than a traditional CELP parametric codec. Two approaches are discussed. The first is based on separating the signal into voiced components, which are encoded using the parametric algorithm; unvoiced components, which are encoded perceptually; and transients, which remain unencoded. The second approach uses perceptual encoding of the residual signal in a CELP codec. The algorithm applied for precise transient selection is described. The signal quality achieved using the proposed hybrid codec is compared to the quality of some standard speech codecs.
Keywords: CELP residual coding, hybrid codec architecture, perceptual speech coding, speech codecs comparison.
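The voiced/unvoiced/transient split in the first approach can be illustrated with a simple frame classifier based on short-time energy and zero-crossing rate. The thresholds and decision rule below are assumptions for demonstration, not the paper's precise transient-selection algorithm.

import numpy as np

def classify_frames(x, frame=256, e_hi=0.02, zcr_lo=0.15, jump=3.0):
    """Label frames voiced / unvoiced / transient from energy and zero-crossing rate."""
    labels, prev_e = [], None
    for i in range(0, len(x) - frame, frame):
        f = x[i:i + frame]
        e = float(np.mean(f ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(f))) > 0))
        if prev_e is not None and prev_e > 0 and e / prev_e > jump:
            labels.append("transient")        # sudden energy jump
        elif e > e_hi and zcr < zcr_lo:
            labels.append("voiced")           # high energy, low ZCR
        else:
            labels.append("unvoiced")
        prev_e = e
    return labels

t = np.linspace(0, 1, 8000)
x = np.concatenate([0.01 * np.random.randn(2000), 0.5 * np.sin(2 * np.pi * 120 * t[:6000])])
print(classify_frames(x)[:8])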
4167 Intelligent Agent Communication by Using DAML to Build Agent Community Ontology
Authors: Cheng-Hsiung Hung, Hong-Jie Dai, Jason Jen-Yen Chen
Abstract:
This paper presents a new approach for intelligent agent communication based on an ontology for the agent community. The DARPA agent markup language (DAML) is used to build the community ontology. The paper extends the agent management specification of the Foundation for Intelligent Physical Agents (FIPA) to develop an agent role called the community facilitator (CF), which manages the community directory and the community ontology. The CF helps build the agent community, so that a precise description of agent services in this community can be achieved. This facilitates agent communication. Furthermore, through ontology updates, agents with different ontologies are capable of communicating with each other. An example of an advanced traveler information system is included to illustrate the practicality of this approach.
Keywords: Intelligent agent communication, DARPA agent markup language (DAML), Community ontology, Advanced Traveler Information System (ATIS).
4166 Analyzing Periurban Fringe with Rough Set
Authors: Benedetto Manganelli, Beniamino Murgante
Abstract:
The distinction among urban, periurban and rural areas represents a classical example of uncertainty in land classification. Satellite images, geostatistical analysis and all kinds of spatial data are very useful in urban sprawl studies, but it is important to define precise rules for combining great amounts of data to build complex knowledge about the territory. Rough Set theory may be a useful method to employ in this field. It represents a different mathematical approach to uncertainty, capturing indiscernibility: two phenomena can be indiscernible in some contexts and classified in the same way when the available information about them is combined. This approach has been applied in a case study, comparing the results achieved with both the Map Algebra technique and Spatial Rough Sets. The case study area, Potenza Province, is particularly suitable for the application of this theory, because it includes 100 municipalities with different numbers of inhabitants and morphological features.
Keywords: Land Classification, Map Algebra, Periurban Fringe, Rough Set, Urban Planning, Urban Sprawl.
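The indiscernibility idea at the heart of Rough Set theory reduces to computing lower and upper approximations of a target concept over equivalence classes of attribute values. A toy sketch (municipality names and attribute bands are invented):

from collections import defaultdict

def approximations(objects, attrs, target):
    """Rough-set lower/upper approximation of `target` w.r.t. an attribute projection."""
    classes = defaultdict(set)                    # indiscernibility classes
    for obj, desc in objects.items():
        classes[tuple(desc[a] for a in attrs)].add(obj)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:
            lower |= cls                          # certainly in the concept
        if cls & target:
            upper |= cls                          # possibly in the concept
    return lower, upper

# Toy municipalities described by (density, distance-to-city) bands.
data = {"m1": {"dens": "high", "dist": "near"}, "m2": {"dens": "high", "dist": "near"},
        "m3": {"dens": "low", "dist": "far"},  "m4": {"dens": "low", "dist": "near"}}
lo, up = approximations(data, ["dens", "dist"], target={"m1", "m4"})
print(lo, up)  # the boundary region up - lo holds the 'periurban' ambiguity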
4165 Hybrid Method Using Wavelets and Predictive Method for Compression of Speech Signal
Authors: Karima Siham Aoubid, Mohamed Boulemden
Abstract:
The development of signal compression algorithms is making impressive progress. These algorithms are continuously improved by new tools and aim to reduce, on average, the number of bits necessary for the signal representation while minimizing the reconstruction error. This article proposes the compression of the Arabic speech signal by a hybrid method combining the wavelet transform and linear prediction. The adopted approach rests, on one hand, on decomposing the original signal by means of analysis filters, followed by the compression stage, and, on the other hand, on applying an order-5 linear prediction to the compressed signal coefficients. The aim of this approach is the estimation of the prediction error, which is then coded and transmitted. The decoding operation is used to reconstitute the original signal. An adequate choice of the filter bank is therefore necessary to increase the compression rate while keeping the distortion imperceptible from an auditory point of view.
Keywords: Compression, linear prediction analysis, multiresolution analysis, speech signal.
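The order-5 linear prediction step can be illustrated with a standard Levinson-Durbin recursion. This is a generic sketch on a synthetic signal, not the paper's codec:

import numpy as np

def lpc(signal, order=5):
    """Order-`order` linear prediction coefficients via Levinson-Durbin recursion."""
    x = signal - signal.mean()
    r = np.correlate(x, x, "full")[len(x) - 1:len(x) + order]  # autocorrelation r[0..order]
    a, e = np.zeros(order + 1), r[0]
    a[0] = 1.0
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / e   # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1][:i]          # coefficient update
        e *= (1 - k * k)
    return a, e   # prediction-error filter A(z) and residual energy

x = np.sin(0.3 * np.arange(400)) + 0.01 * np.random.randn(400)
a, e = lpc(x, 5)
residual = np.convolve(x, a)[:len(x)]               # prediction error to be coded
print(a.round(3), round(float(e), 4))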
4164 Physical Modeling of Oil Well Fire Extinguishing Using a Turbojet on a Barge
Authors: M. Abbaspour, D. Mansouri, N. Mansouri
Abstract:
There are reports of gas and oil well fires due to different accidents. Many different methods are used for firefighting in the gas and oil industry. Traditional fire extinguishing techniques face many problems, are usually time-consuming, and need a lot of equipment. Besides, they cause damage to facilities and create health and environmental problems. This article proposes an innovative approach to fire extinguishing techniques in the oil and gas industry, especially applicable to burning oil wells located offshore. Fire extinguishment employing a turbojet is a novel approach which can help extinguish the fire in a short period of time. Divergent and convergent turbojets modeled at laboratory scale, along with a high-pressure flame, were used. Different experiments were conducted to determine the relationship between the output discharges of the turbojet and the oil wells. The results were correlated, and the relationships between the dimensionless parameters of the flame and the fire extinguishment distances, and between the output discharges of the turbojet and the oil wells at specified distances, are demonstrated by specific curves.
Keywords: Burning well, fire extinguishment, gas/oil industry, simulation.
4163 An Approach for Blind Source Separation Using the Sliding DFT and Time Domain Independent Component Analysis
Authors: Koji Yamanouchi, Masaru Fujieda, Takahiro Murakami, Yoshihisa Ishida
Abstract:
The "cocktail party problem" refers to a well-known human auditory ability: we can recognize a specific sound that we want to listen to even when many undesirable sounds or noises are mixed in. Blind source separation (BSS) based on independent component analysis (ICA) is one of the methods by which a specific signal can be separated from mixed signals under simple hypotheses. In this paper, we propose an online approach for blind source separation using the sliding DFT and time domain independent component analysis. The proposed method reduces calculation complexity in comparison with conventional methods, and can be applied to parallel processing using digital signal processors (DSPs). We evaluate this method and show its effectiveness.
Keywords: Cocktail party problem, blind source separation (BSS), independent component analysis, sliding DFT, online processing.
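The sliding DFT updates a single DFT bin in O(1) per incoming sample via the recurrence X_k(n) = e^{j2*pi*k/N} (X_k(n-1) + x(n) - x(n-N)), which is what makes online, DSP-friendly processing possible. A minimal sketch, verified against a full FFT of the final window:

import numpy as np

def sliding_dft(x, N, k):
    """Track bin k of an N-point DFT over a sliding window, O(1) per sample."""
    w = np.exp(2j * np.pi * k / N)
    Xk, out = 0.0 + 0j, []
    for n, xn in enumerate(x):
        x_old = x[n - N] if n >= N else 0.0
        Xk = (Xk + xn - x_old) * w       # comb filter followed by a complex resonator
        out.append(Xk)
    return np.array(out)

N, k = 64, 5
x = np.cos(2 * np.pi * k * np.arange(256) / N)
sd = sliding_dft(x, N, k)
ref = np.fft.fft(x[256 - N:])[k]         # full DFT of the last window
print(np.allclose(sd[-1], ref, atol=1e-8))  # True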
4162 Energy Map Construction using Adaptive Alpha Grey Prediction Model in WSNs
Authors: Surender Kumar Soni, Dhirendra Pratap Singh
Abstract:
Wireless Sensor Networks (WSNs) can be used to monitor physical phenomena in areas where human access is nearly impossible. The limited power supply, due to the use of non-rechargeable batteries in sensor nodes, is hence the major constraint of WSNs, and much research is devoted to reducing the energy consumption of sensor nodes. An energy map can be used with clustering, data dissemination and routing techniques to reduce the power consumption of a WSN, and also to predict which part of the network is going to fail in the near future. In this paper, an energy map is constructed using a prediction-based approach, with the adaptive alpha GM(1,1) model as the prediction model. GM(1,1) is used worldwide in many applications for predicting future values of a time series from past values, owing to its high computational efficiency and accuracy.
Keywords: Adaptive Alpha GM(1,1) Model, Energy Map, Prediction Based Data Reduction, Wireless Sensor Networks
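The classical GM(1,1) model underlying the adaptive-alpha variant accumulates the series (AGO), fits the grey parameters a and b by least squares, and forecasts through the time-response function. A minimal sketch in which alpha is the tunable background weight (0.5 in the classical model) and the energy readings are invented:

import numpy as np

def gm11_forecast(x0, steps=1, alpha=0.5):
    """GM(1,1) grey prediction; `alpha` is the background-value weight
    (0.5 classically; the adaptive variant tunes it)."""
    x0 = np.asarray(x0, float)
    x1 = np.cumsum(x0)                                  # accumulated series (AGO)
    z1 = alpha * x1[1:] + (1 - alpha) * x1[:-1]         # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey parameters
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.diff(x1_hat, prepend=0.0)               # inverse AGO
    return x0_hat[n:]                                   # forecasts beyond the data

energy = [95.0, 91.8, 88.9, 86.1, 83.4]                 # residual node energy (%), invented
print(gm11_forecast(energy, steps=2).round(2))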
4161 Prioritization of Mutation Test Generation with Centrality Measure
Authors: Supachai Supmak, Yachai Limpiyakorn
Abstract:
Mutation testing can be applied for the quality assessment of test cases. Prioritization of mutation test generation has been a critical element of industry practice that contributes to the evaluation of test cases. The industry generally delivers products under time-to-market pressure and thus inevitably sacrifices software testing tasks, even though many test cases are required for software verification. This paper presents an approach of applying a social network centrality measure, PageRank, to prioritize mutation test generation. The source code with the highest PageRank values is focused on first when developing test cases, as these modules are vulnerable to defects or anomalies which may cause consequent defects in many other associated modules. Moreover, the approach helps identify the reducible test cases in the test suite, while still maintaining the same criteria as the original number of test cases.
Keywords: Software testing, mutation test, network centrality measure, test case prioritization.
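Applying PageRank to a module dependency graph takes only a few lines with networkx; the call graph below is hypothetical:

import networkx as nx

# Hypothetical call graph: an edge u -> v means module u calls module v.
calls = [("ui", "auth"), ("ui", "cart"), ("cart", "pricing"),
         ("cart", "inventory"), ("auth", "session"), ("pricing", "discount")]
G = nx.DiGraph(calls)

rank = nx.pagerank(G)                        # centrality of each module
order = sorted(rank, key=rank.get, reverse=True)
print(order)  # generate mutation tests for the highest-ranked modules first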
4160 A Discrete-Event-Simulation Approach for Logistic Systems with Real Time Resource Routing and VR Integration
Authors: Gerrit Alves, Jürgen Roßmann, Roland Wischnewski
Abstract:
Today, transport and logistic systems are often tightly integrated into production. Lean production and just-in-time delivery create multiple constraints that have to be fulfilled. As transport networks have often evolved over time, they are very expensive to change. This paper describes a discrete-event-simulation system which simulates transportation models using real time resource routing and collision avoidance. It allows for the specification of custom control algorithms and the validation of new strategies. The simulation is integrated into a virtual reality (VR) environment and can be displayed in 3-D to show the progress; simulation elements can be selected through VR metaphors. All data gathered during the simulation can be presented as a detailed summary afterwards, and the included cost-benefit calculation can help to optimize the financial outcome. The operation of this approach is shown by the example of a timber harvest simulation.
Keywords: Discrete-Event-Simulation, Logistic, Simulation, Virtual Reality.
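The discrete-event core of such a simulator is an event queue ordered by timestamp. A minimal sketch (transport names, distances, and the fixed unloading time are invented):

import heapq

def simulate(transports, speed=1.0):
    """Tiny discrete-event core: (time, id, event) tuples on a priority queue."""
    queue, log = [], []
    for tid, distance in transports:
        heapq.heappush(queue, (distance / speed, tid, "arrive"))
    while queue:
        clock, tid, event = heapq.heappop(queue)   # advance the clock to the next event
        log.append((round(clock, 1), tid, event))
        if event == "arrive":                      # schedule the follow-up event
            heapq.heappush(queue, (clock + 2.0, tid, "unload_done"))
    return log

for entry in simulate([("truck1", 5.0), ("truck2", 3.5)]):
    print(entry)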
4159 A Graph Theoretic Approach for Quantitative Evaluation of NAAC Accreditation Criteria for the Indian University
Authors: Nameesh Miglani, Rajeev Saha, R. S. Parihar
Abstract:
Estimating the quality of higher education within a university is practically a long-drawn process, besides being difficult to measure, primarily due to the lack of a standard scale. The National Assessment and Accreditation Council (NAAC) evolved an assessment methodology which involves self-appraisal by each university/college and an assessment of performance by an expert committee. The attributes involved in assessing a university may not be totally independent of each other, thereby necessitating the consideration of interdependencies. The present study focuses on the evaluation of assessment criteria using a graph theoretic approach and fuzzy treatment of data collected from students. The technique provides a suitable platform for the university management team to cross-check the assessment of education quality while considering the interdependencies of the attributes using graph theory.
Keywords: Graph theory, NAAC accreditation criteria, Indian University accreditation process.
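In one common formulation of the graph theoretic approach, criteria scores sit on the diagonal of an attribute matrix, interdependencies fill the off-diagonal entries, and the permanent of the matrix yields a single quality index. Whether the paper uses exactly this variant is not stated, so treat the sketch below, with invented numbers, as illustrative:

import numpy as np
from itertools import permutations

def permanent(M):
    """Matrix permanent (feasible for the handful of criteria considered here)."""
    n = len(M)
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

# Diagonal: fuzzy scores of criteria; off-diagonal: pairwise interdependencies.
A = np.array([[7.0, 0.4, 0.6],
              [0.5, 6.0, 0.3],
              [0.2, 0.7, 8.0]])
print(round(float(permanent(A)), 2))   # single index usable for comparing institutions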
4158 An Approach for Transient Response Calculation of Large Nonproportionally Damped Structures Using Component Mode Synthesis
Authors: Alexander A. Muravyov
Abstract:
A minimal complexity version of component mode synthesis is presented that requires simplified computer programming, but still provides adequate accuracy for modeling the lower eigenproperties of large structures and their transient responses. The novelty is that the structural separation into components is done along a plane/surface that exhibits rigid-like behavior; thus only the normal modes of each component are needed, without computing any constraint, attachment, or residual-attachment modes. The approach requires only such input information as a few (lower) natural frequencies and the corresponding undamped normal modes of each component. A novel technique is shown for the formulation of the equations of motion, where a double transformation to generalized coordinates is employed, and the formulation of the nonproportional damping matrix in generalized coordinates is presented.
Keywords: component mode synthesis, finite element models, transient response, nonproportional damping
4157 Contemplating Preference Ratings of Corporate Social Responsibility Practices for Supply Chain Performance System Implementation
Authors: Mohit Tyagi, Pradeep Kumar
Abstract:
The objective of this research work is to identify and analyze the significant corporate social responsibility (CSR) practices with an aim to improve the supply chain performance (SCP) of the automobile industry located in the National Capital Region (NCR) of India. To achieve the objective, 6 CSR practices have been considered and analyzed using the expert's preference rating (EPR) approach. The considered CSR practices are: top management and employee awareness about CSR (P1), employee involvement in social and environmental problems (P2), protection of human rights (P3), waste reduction, energy saving and water conservation (P4), proper visibility of CSR guidelines (P5), and a broad perception towards CSR initiatives (P6). The outcomes of this research may help managers in decision-making processes and in framing policies for SCP implementation in a CSR context.
Keywords: Supply chain performance, corporate social responsibility, CSR practices, expert’s preference rating approach.
4156 Applying Case-Based Reasoning in Supporting Strategy Decisions
Authors: S. M. Seyedhosseini, A. Makui, M. Ghadami
Abstract:
Globalization, and the resulting increasingly tight competition among companies, has increased the importance of making well-timed decisions. Strategies that are flexible and adaptive to a changing market stand a greater chance of being effective in the long term. At the same time, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Applying well-organized tools that employ past experience in new cases therefore helps to make proper managerial decisions. Case based reasoning (CBR) is a means of solving a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbors (k-NN) is employed to provide suggestions for better decision making, adopted for a given product in the middle-of-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach based on a genetic algorithm is employed to generate optimal feature subsets. A dataset from a department store, including various products collected over two years, has been used, and a k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with the classical case based reasoning algorithm, which has no special process for feature selection; the CBR-PCA algorithm, based on filter-approach feature selection; and an Artificial Neural Network. The results indicate that the predictive performance of the model in the specific case is more effective than the two CBR algorithms.
Keywords: Case based reasoning, genetic algorithm, group decision making, product management.
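The retrieval-and-evaluation loop, k-NN retrieval scored by k-fold accuracy, which a GA wrapper would use as its fitness function, can be sketched with scikit-learn (the data and the candidate feature mask are invented):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                       # case features (6 attributes)
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)       # past decision outcomes

mask = np.array([1, 0, 1, 0, 0, 0], bool)           # a feature subset a GA wrapper might propose
knn = KNeighborsClassifier(n_neighbors=5)           # retrieve the 5 most similar cases
scores = cross_val_score(knn, X[:, mask], y, cv=10) # k-fold accuracy as the GA fitness
print(scores.mean().round(3))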
4155 Predictive Clustering Hybrid Regression (pCHR) Approach and Its Application to Sucrose-Based Biohydrogen Production
Authors: Nikhil, Ari Visa, Chin-Chao Chen, Chiu-Yue Lin, Jaakko A. Puhakka, Olli Yli-Harja
Abstract:
A predictive clustering hybrid regression (pCHR) approach was developed and evaluated using a dataset from an H2-producing sucrose-based bioreactor operated for 15 months. The aim was to model and predict the H2-production rate using information available about the envirome and metabolome of the bioprocess. Self-organizing maps (SOM) and the Sammon map were used to visualize the dataset and to identify the main metabolic patterns and clusters in the bioprocess data. Three metabolic clusters were detected: acetate coupled with other metabolites, butyrate only, and transition phases. The developed pCHR model combines the principles of k-means clustering, kNN classification, and regression techniques. The model performed well in modeling and predicting the H2-production rate, with mean square error values of 0.0014 and 0.0032, respectively.
Keywords: Biohydrogen, bioprocess modeling, clustering, hybrid regression.
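The pCHR combination of k-means clustering, kNN-style cluster assignment, and per-cluster regression can be sketched as follows; the data are synthetic stand-ins for the envirome/metabolome measurements:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                    # envirome/metabolome measurements
y = np.where(X[:, 0] > 0, 2.0 * X[:, 1], -X[:, 2]) + 0.1 * rng.normal(size=300)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(3)}                     # one local regressor per metabolic cluster

x_new = rng.normal(size=(1, 4))
c = int(km.predict(x_new)[0])                    # assign the new sample to a cluster
print(models[c].predict(x_new).round(3))         # predict the H2-production rate locally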
4154 Managing a Cross-Disciplinary Research Project in a University: The Case of LEARNIT
Authors: Yulia Stukalina
Abstract:
This paper explores the main issues related to implementing a cross-disciplinary research project (LEARNIT) based on collaboration between universities from three European countries. The paper discusses the importance of using a holistic approach to managing scientific projects, with due account for the complicated nature of the educational environment of a modern university. To illustrate this approach, the author describes some actions to be taken to support the different focus areas of the LEARNIT project, in the process using the integrated tangible, non-tangible, and semi-tangible resources of the partner university. The methodology of the paper is based on the analysis of academic literature and research papers within the management discipline. The analysis reported in the paper also draws on the author's professional experience in managing international research projects in a university.
Keywords: LEARNIT, focus area, project management, resources.
4153 Community Perceptions and Attitudes Regarding Wildlife Crime in South Africa
Authors: Louiza C. Duncker, Duarte Gonçalves
Abstract:
Wildlife crime is a complex problem with many interconnected facets, which are generally responded to in parts or fragments in efforts to “break down” the complexity into manageable components. However, fragmentation increases complexity as coherence and cooperation become diluted. A whole-of-society approach has been developed towards finding a common goal and an integrated approach to preventing wildlife crime. As part of this development, research was conducted in rural communities adjacent to conservation areas in South Africa to define and comprehend the challenges they face, and to understand their perceptions of wildlife crime. The results of the research showed that the perceptions of community members varied: most were in favor of conservation and of protecting rhinos, but only if they derive adequate benefit from it. Regardless of gender, income level, education level, or access to services, conservation was perceived to be both good and bad by the same people. Even though people in the communities are poor, a willingness to stop rhino poaching does exist amongst them, but their perception that parks do not care about people triggered an attitude of not being willing to stop, prevent or report poaching. Understanding the nuances, the history, the interests and values of community members, and the drivers behind poaching mind-sets (intrinsic or driven by transnational organized crime) is imperative to create sustainable and resilient communities on multiple levels that make a substantial positive impact on people's lives, while also conserving wildlife for posterity.
Keywords: Conservation, community perceptions, wildlife crime, rhino poaching, interest and value creation, whole-of-society approach.
4152 Modular Data and Calculation Framework for a Technology-Based Mapping of the Manufacturing Process According to the Value Stream Management Approach
Authors: Tim Wollert, Fabian Behrendt
Abstract:
Value Stream Management (VSM) is a widely used methodology in the context of Lean Management for improving end-to-end material and information flows from a supplier to a customer from a company's perspective. Whereas the design principles, e.g., pull, value-adding, and customer orientation, are still valid against the background of an increasingly digitalized and dynamic environment, the methodology itself for mapping a value stream is time- and resource-intensive due to the high degree of manual activity. The digitalization of processes in the context of Industry 4.0 enables new opportunities to reduce these manual efforts and make the VSM approach more agile. The paper at hand aims at providing a modular data and calculation framework, utilizing the available business data provided by information and communication technologies, for automating the value stream mapping process with a focus on the manufacturing process.
Keywords: Industry 4.0, lean management 4.0, value stream management 4.0, value stream mapping.
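A first building block of such a calculation framework is deriving value-stream metrics, process time, lead time, and flow efficiency, from business data instead of stopwatch studies. A minimal sketch with invented step data:

steps = [  # (process, cycle time [s], waiting before the step [s]), illustrative data
    ("stamping", 40, 3600),
    ("welding",  65, 7200),
    ("assembly", 120, 1800),
]
process_time = sum(ct for _, ct, _ in steps)
lead_time = process_time + sum(w for _, _, w in steps)
print(f"process time {process_time} s, lead time {lead_time} s, "
      f"flow efficiency {100 * process_time / lead_time:.1f} %")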
4151 Clustering based Voltage Control Areas for Localized Reactive Power Management in Deregulated Power System
Authors: Saran Satsangi, Ashish Saini, Amit Saraswat
Abstract:
In this paper, a new K-means clustering-based approach for the identification of voltage control areas is developed. Voltage control areas are important for efficient reactive power management in power systems operating under a deregulated environment. Although voltage control areas are conventionally formed using a hierarchical clustering-based method, the present paper investigates the capability of K-means clustering for forming voltage control areas. The proposed method is tested and compared for the IEEE 14 bus and IEEE 30 bus systems. The results show that this K-means based method is competitive with the conventional hierarchical approach.
Keywords: Voltage control areas, reactive power management, K-means clustering algorithm
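The clustering step itself is straightforward once each bus is described by a feature vector of voltage sensitivities; the features below are random stand-ins for quantities a load-flow study would provide:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Stand-in feature matrix: each row is a bus, columns are voltage-sensitivity
# features (e.g. rows of a reduced Jacobian); real studies derive these from load flow.
buses = np.vstack([rng.normal(0.0, 0.1, (10, 4)),
                   rng.normal(1.0, 0.1, (10, 4)),
                   rng.normal(2.0, 0.1, (10, 4))])

areas = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(buses)
for a in range(3):
    print(f"voltage control area {a}: buses {np.flatnonzero(areas == a).tolist()}")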
4150 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration
Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith
Abstract:
Multimodal image registration is a profoundly complex task, which is why deep learning has been used widely to address it in recent years. However, two main challenges remain. Firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB, G_BA and two discrimination networks D_A, D_B connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which is learning to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images Â, B̂ that were registered to B̃, Ã, which were in turn registered to the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning and non-learning based registration algorithms. Our approach leads to Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
Keywords: Multimodal image registration, GAN, cycle consistency, deep learning.
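The Dice score used for evaluation compares two binary segmentation masks. A minimal sketch with toy masks standing in for fixed and warped liver segmentations:

import numpy as np

def dice(a, b):
    """Dice score between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

fixed_mask = np.zeros((64, 64), int); fixed_mask[20:40, 20:40] = 1
warped_mask = np.zeros((64, 64), int); warped_mask[22:42, 21:41] = 1
print(round(float(dice(fixed_mask, warped_mask)), 3))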
4149 An Impulse-Momentum Approach to Swing-Up Control of Double Inverted Pendulum on a Cart
Authors: Thamer Ali Albahkali
Abstract:
The challenge in the swing-up problem of the double inverted pendulum on a cart (DIPC) is to design a controller that brings all of the DIPC's states, especially the joint angles of the two links, into the region of attraction of the desired equilibrium. This paper proposes a new method to swing up the DIPC based on a series of rest-to-rest maneuvers of the first link about its vertically upright configuration while holding the cart fixed at the origin. The rest-to-rest maneuvers are designed such that each one results in a net gain in energy of the second link. This results in swing-up of the DIPC's configuration to the region of attraction of the desired equilibrium. A three-step algorithm is provided for swing-up control, followed by the stabilization step. Simulation results, with a comparison to an experimental work reported in the literature, are presented to demonstrate the efficacy of the approach.
Keywords: Double inverted pendulum, impulse, momentum, underactuated.
4148 Boosting Method for Automated Feature Space Discovery in Supervised Quantum Machine Learning Models
Authors: Vladimir Rastunkov, Jae-Eun Park, Abhijit Mitra, Brian Quanz, Steve Wood, Christopher Codella, Heather Higgins, Joseph Broz
Abstract:
Quantum Support Vector Machines (QSVM) have become an important tool in research on and applications of quantum kernel methods. In this work, we propose a boosting approach for building ensembles of QSVM models and assess the performance improvement across multiple datasets. This approach is derived from the best ensemble-building practices that have worked well in traditional machine learning and thus should push the limits of quantum model performance even further. We find that in some cases a single QSVM model with tuned hyperparameters is sufficient to simulate the data, while in others an ensemble of QSVMs that are forced to explore the feature space via the proposed method is beneficial.
Keywords: QSVM, Quantum Support Vector Machines, quantum kernel, boosting, ensemble.
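As a classical analogue of the proposed ensemble building (the quantum kernel is replaced here by an RBF-kernel SVC purely for illustration), an AdaBoost-style loop re-weights hard samples between rounds:

import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(200, noise=0.25, random_state=0)
y_pm = 2 * y - 1                                   # labels in {-1, +1}
w = np.full(len(y), 1.0 / len(y))                  # sample weights
models, alphas = [], []
for _ in range(5):                                 # 5 boosting rounds
    clf = SVC(kernel="rbf", gamma=1.0).fit(X, y, sample_weight=w)
    miss = clf.predict(X) != y
    err = max(w[miss].sum(), 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)          # model weight in the ensemble
    w *= np.exp(alpha * np.where(miss, 1.0, -1.0)) # up-weight misclassified samples
    w /= w.sum()
    models.append(clf); alphas.append(alpha)

score = sum(a * (2 * m.predict(X) - 1) for a, m in zip(alphas, models))
print((np.sign(score) == y_pm).mean())             # ensemble training accuracy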
4147 A Dose Distribution Approach Using Monte Carlo Simulation in Dosimetric Accuracy Calculation for Treating the Lung Tumor
Authors: Md Abdullah Al Mashud, M. Tariquzzaman, M. Jahangir Alam, Tapan Kumar Godder, M. Mahbubur Rahman
Abstract:
This paper presents Monte Carlo (MC) method-based dose distributions on a lung tumor for a 6 MV photon beam, to improve dosimetric accuracy for cancer treatment. Polystyrene, which is a tissue-equivalent material matching the lung tumor density, is used in this research. In the empirical calculations, the TRS-398 formalism of the IAEA has been used, and the setup was made according to the ICRU recommendations. The research outcomes were compared with state-of-the-art experimental results. From the experimental results, it is observed that the proposed approach provides more accurate results than the existing approaches. The average percentage variation between measured and TPS-simulated values was 1.337 ± 0.531, which shows a substantial improvement compared with the state-of-the-art technology.
Keywords: Lung tumor, Monte Carlo, polystyrene, Elekta Synergy, Monaco Planning System.
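The reported percentage variation between measured and TPS-simulated doses is a simple statistic; a sketch with invented dose readings:

import numpy as np

measured = np.array([2.02, 1.98, 2.05, 2.10])    # ionization-chamber doses (Gy), illustrative
simulated = np.array([2.00, 2.01, 2.02, 2.06])   # TPS / Monte Carlo values (Gy)
variation = 100.0 * (measured - simulated) / measured
print(f"{variation.mean():.3f} +/- {variation.std(ddof=1):.3f} %")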