Search results for: machine tools
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2328

78 TheAnalyzer: Clustering-Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human-Computer Interaction

Authors: D. S. A. Nanayakkara, K. J. P. G. Perera

Abstract:

E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on recommending that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals such as usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points based on users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction, loyalty, and ultimately sales.
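
A minimal sketch of the clustering pipeline the abstract describes, assuming five numeric interaction features per user; the feature names, data, and cluster count below are illustrative, not taken from the paper:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical matrix: one row per user, five interaction analytics
# (e.g., session length, pages viewed, searches, cart adds, purchases).
X = np.random.rand(500, 5)

X_std = StandardScaler().fit_transform(X)          # data standardization
X_red = PCA(n_components=2).fit_transform(X_std)   # dimensionality reduction
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_red)
# Domain experts can now attach a customized business rule to each cluster.
```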

Keywords: Data clustering, data standardization, dimensionality reduction, human-computer interaction, user profiling.

77 Potential of High Performance Ring Spinning Based on Superconducting Magnetic Bearing

Authors: M. Hossain, A. Abdkader, C. Cherif, A. Berger, M. Sparing, R. Hühne, L. Schultz, K. Nielsch

Abstract:

Due to the best yarn quality and the flexibility of the machine, the ring spinning process is the most widely used spinning method for short staple yarn production. However, the productivity of these machines is still much lower in comparison to other spinning systems such as the rotor or air-jet spinning process. The main reason for this limitation lies in the twisting mechanism of the ring spinning process. In the ring/traveler twisting system, each rotation of the traveler along with the ring inserts twist in the yarn. The rotation of the traveler at higher speed involves strong frictional forces, which in turn generate heat. Different ring/traveler systems with various geometries, material combinations, and coatings have already been implemented to solve the friction problem. However, such developments can neither completely solve the friction problem nor increase the productivity. The friction-free superconducting magnetic bearing (SMB) system is a promising alternative to the existing ring/traveler system. The unique feature of SMB bearings is that they possess self-stabilizing behavior, i.e., they remain fully passive without any need for expensive position sensing and control. Within the framework of a research project funded by the German Research Foundation (DFG), suitable concepts of the SMB system have been designed, developed, and integrated as a twisting device of ring spinning, replacing the existing ring/traveler system. With the help of the developed mathematical model and experimental investigation, the physical limitations of this innovative twisting device in the spinning process have been determined. The interaction among the parameters of the spinning process and the superconducting twisting element has been further evaluated, yielding concrete information about the new spinning process. Moreover, the influence of the implemented SMB twisting system on the yarn quality has been analyzed with respect to different process parameters. The presented work reveals the enormous potential of the innovative twisting mechanism, such that the productivity of the ring spinning process, especially for thermoplastic materials, can be at least doubled for the first time in a hundred years. The SMB ring spinning tester was also presented at the international trade fair International Textile Machinery Association (ITMA) 2015.
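
For context, the productivity limit follows from a standard ring-spinning relation (textbook notation, not quoted from the paper): the twist inserted per metre of yarn is set by the traveler speed and the delivery speed, so raising delivery speed at constant twist demands a proportionally faster traveler.

```latex
T = \frac{n_t}{v_d} \quad \text{(turns per metre)}
% n_t: traveler rotational speed, v_d: yarn delivery speed.
% A friction-free SMB twisting element removes the thermal limit on n_t.
```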

Keywords: Ring spinning, superconducting magnetic bearing, yarn properties, productivity.

76 The Performance Analysis of Valveless Micropump with Contoured Nozzle/Diffuser

Authors: Cheng-Chung Yang, Jr-Ming Miao, Fuh-Lin Lih, Tsung-Lung Liu, Ming-Hui Ho

Abstract:

The operating performance of a valveless micropump is strongly dependent on the shape of the connected nozzle/diffuser and on the Reynolds number. The aims of the present work are to compare the performance curves of a micropump with the original straight nozzle/diffuser and with a contoured nozzle/diffuser under different back pressure conditions. The tested valveless micropumps are assembled from five patterned PMMA plates using a hot-embossing technique. The structures of the central chamber, the inlet/outlet reservoirs, and the connected nozzle/diffuser are fabricated with a laser cutting machine. The micropump is actuated with a circular PZT film bonded to the bottom of the central chamber. The deformation of the PZT membrane under various input voltages is measured with a laser displacement probe. A simple testing facility is also constructed to evaluate the performance curves for comparison. In order to observe the evolution of the low-Reynolds-number multiple vortex flow patterns within the micropump during suction and pumping modes, the unsteady, incompressible, laminar, three-dimensional Reynolds-averaged Navier-Stokes equations are solved. The working fluid is DI water with constant thermo-physical properties. The oscillating behavior of the PZT film is modeled as a moving boundary wall by way of a UDF program. With the dynamic mesh method, the instantaneous pressure and velocity fields are obtained and discussed. Results indicated that the volume flow rate does not increase monotonically with the oscillating frequency of the PZT film, regardless of the shape of the nozzle/diffuser. The present micropump can generate a maximum volume flow rate of 13.53 ml/min when the operating frequency is 64 Hz and the input voltage is 140 V. The micropump with the contoured nozzle/diffuser can provide a 7 ml/min flow rate even when the back pressure is up to 400 mm-H2O. CFD results revealed that the central chamber was occupied by multiple pairs of counter-rotating vortices during suction and pumping modes. The net volume flow rate over a complete oscillating period of the PZT film was evaluated from these instantaneous fields.
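
A first-order relation commonly used in the valveless-micropump literature (consistent with, but not derived in, this abstract) ties the net flow direction to the pressure-loss coefficients of the two flow directions:

```latex
\eta = \frac{\sqrt{\xi_n/\xi_d} - 1}{\sqrt{\xi_n/\xi_d} + 1}
% xi_n, xi_d: loss coefficients in the nozzle and diffuser directions;
% eta > 0 gives net flow toward the diffuser over one PZT oscillation cycle.
```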

Keywords: Valveless micropump, PZT diaphragm, contoured nozzle/diffuser, vortex flow.

75 Application of Recycled Tungsten Carbide Powder for Fabrication of Iron Based Powder Metallurgy Alloy

Authors: Yukinori Taniguchi, Kazuyoshi Kurita, Kohei Mizuta, Keigo Nishitani, Ryuichi Fukuda

Abstract:

Tungsten carbide is widely used as a tool material in metal manufacturing processes. Since tungsten is a typical rare metal, establishing a recycling process for tungsten carbide tools and restoring them into cemented carbide material would have a great impact on the metal manufacturing industry. Recently, recycling processes for tungsten carbide have been gradually developed and established. However, the quality demands on cemented carbide tools are quite severe, because hardness, toughness, anti-wear ability, heat resistance, fatigue strength, and so on must be guaranteed for precision machining and tool life. Currently, it is hard to restore recycled tungsten carbide powder entirely as a raw material for newly processed cemented carbide tools. In this study, to suggest a positive use of recycled tungsten carbide powder, we have tried to fabricate a carbon-based sintered steel whose mechanical properties are reinforced by recycled tungsten carbide powder. We prepared a set of newly designed sintered steels. Compression tests of sintered specimens with a density ratio of 0.85 (i.e., 15% internal porosity) were conducted. As a result, the specimen containing 7.0 wt.% recycled WC powder showed at least 1.7 times higher nominal strength. The strength reached over 600 MPa for the Fe-WC-Co-Cu sintered alloy. Wear tests were conducted on a ball-on-disk friction tester with a 5 mm diameter ball and a normal force of 2 N under dry conditions. The wear amount after a 1,000 m running distance indicates about 1.5 times longer life for the designed sintered alloy. Since the tensile test results showed the same tendency, it is concluded that the designed sintered alloy can be used for several mechanical parts requiring strength and anti-wear ability at relatively low cost, owing to the recycled tungsten carbide powder.

Keywords: Tungsten carbide, recycle process, compression test, powder metallurgy, anti-wear ability.

74 Dynamic Analysis of Reduced Order Large Rotating Vibro-Impact Systems

Authors: Miroslav Byrtus

Abstract:

Large rotating systems, especially gear drives and gearboxes, occur as parts of many mechanical devices transmitting torque with relatively small loss of power. With the increased demand for high-speed machinery, mathematical modeling and dynamic analysis of gear drives have gained importance. Mathematical description of such mechanical systems is a complex task that has been evolving for several decades. In gear drive dynamic models which include flexible shafts, bearings, and gearing and use finite elements, nonlinear effects due to gear mesh and bearings are usually ignored, since such models have a large number of degrees of freedom (DOF) and it is computationally expensive to analyze nonlinear systems with many DOF. Therefore, these models are not suitable for simulation of nonlinear behavior with amplitude jumps in the frequency response. This contribution uses a methodology for nonlinear large rotating system modeling based on DOF number reduction using the modal synthesis method (MSM). The MSM enables significant DOF number reduction while keeping the nonlinear behavior of the system in a specific frequency range. Further, the MSM with DOF number reduction is suitable for including detailed models of nonlinear couplings (mainly gear and bearing couplings) in complete gear drive models. Since each subsystem is modeled separately using different FEM systems, it is advantageous to parameterize the subsystem models and to use the parameterization for optimization of chosen design parameters. The final complex model of the gear drive is assembled in MATLAB, and MATLAB tools are used for dynamic analysis of the nonlinear system. The contribution is further focused on developing a methodology for investigating the behavior of the system by nonlinear normal modes in combination with the MSM, using a numerical continuation method. The proposed methodology will be tested on a two-stage gearbox including its housing.
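
A generic sketch of the reduction step behind the modal synthesis method, in notation assumed here rather than taken from the paper: the full finite-element coordinates are approximated by a small set of master modes, and the nonlinear coupling forces are evaluated in the reduced space.

```latex
% Full FE model (n DOF):
M\ddot{q} + B\dot{q} + Kq = f(t) + f_{nl}(q,\dot{q})
% Reduction with the first m eigenmodes V_m of the linear part,
% q \approx V_m x with m \ll n:
\tilde{M}\ddot{x} + \tilde{B}\dot{x} + \tilde{K}x
  = V_m^{T}\big(f(t) + f_{nl}(V_m x, V_m \dot{x})\big),
\qquad \tilde{M} = V_m^{T} M V_m,\ \tilde{B} = V_m^{T} B V_m,\ \tilde{K} = V_m^{T} K V_m
```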

Keywords: Vibro-impact system, rotating system, gear drive, modal synthesis method, numerical continuation method, periodic solution.

73 The Role of Acoustical Design within Architectural Design in the Early Design Phase

Authors: O. Wright, N. Perkins, M. Donn, M. Halstead

Abstract:

This research responded to anecdotal evidence suggesting that inefficiencies in the architect-acoustician relationship may lead to ineffective acoustic design decisions. The acoustician spoken to believed that he was approached too late in the design phase. The architect approached valued acoustical qualities, yet struggled to interpret common measurement parameters. The preliminary investigation of these opinions indicated a gap in the current New Zealand architectural discourse and currently informs a 2016 Master of Architecture (Prof.) thesis research. Little meaningful information about acoustic intervention in the early design phase could be found in past literature. In the information that was sourced, authors focus on software as an incorporation tool without investigating why the flaws in the relationship exist in the first place. To further explore this relationship, a survey was designed. It underwent three phases to ensure its consistency and was delivered to a group of 51 acousticians from one international acoustics company. The results were then separated between New Zealand and off-shore to identify trends. The survey results suggest that 75% of acousticians meet the architect fewer than 5 times per project. Instead of regular contact, a mediated method is adopted through a mix of telecommunication and written reports. Acousticians tend to be introduced later into New Zealand building projects than into the corresponding off-shore projects. This delay corresponds to an increase in remedial action for each of the building types in the survey except auditoria and office buildings. Thirty-one participants had had their specifications challenged by an architect. Furthermore, 71% of the acousticians believe that architects do not have the knowledge to understand why the acoustic specifications are in place. The issues raised in this investigation align with the anecdotal evidence expressed by the two consultants. They identify a larger gap in the industry, where acoustics is treated remedially rather than identified as a possible design driver. Further research through design is suggested to understand the role of acoustics within architectural design and potential tools for its inclusion during, not after, the design process.

Keywords: Architectural acoustics, early-design, interdisciplinary communication, remedial response.

72 Transcriptomics Analysis Comparing Non-Small Cell Lung Cancer versus Normal Lung, and Early-Stage versus Late-Stage Non-Small Cell Lung Cancer

Authors: Achitphol Chookaew, Paramee Thongsukhsai, Patamarerk Engsontia, Narongwit Nakwan, Pritsana Raugrut

Abstract:

Lung cancer is one of the most common malignancies and a primary cause of cancer death worldwide. Non-small cell lung cancer (NSCLC) is the main subtype, in which the majority of patients present with advanced-stage disease. Herein, we analyzed differentially expressed genes to find potential biomarkers for lung cancer diagnosis as well as prognostic markers. We used transcriptome data from our two NSCLC patients and public data (GSE81089) comprising 8 NSCLC and 10 normal lung tissues. Differentially expressed genes (DEGs) between NSCLC and normal tissue and between early-stage and late-stage NSCLC were analyzed with DESeq2. Pairwise comparison was used to find the DEGs with a false discovery rate (FDR) adjusted p-value ≤ 0.05 and |log2 fold change| ≥ 4 for NSCLC versus normal, and an FDR adjusted p-value ≤ 0.05 with |log2 fold change| ≥ 2 for early- versus late-stage NSCLC. Bioinformatic tools were used for functional and pathway analysis. Moreover, the expression and survival of the top ten genes in each comparison group were verified via GEPIA. We found 150 up-regulated and 45 down-regulated genes in NSCLC compared to normal tissues. Many immunoglobulin-related genes, e.g., IGHV4-4, IGHV5-10-1, IGHV4-31, IGHV4-61, and IGHV1-69D, were significantly up-regulated. Twenty-two genes were up-regulated and five genes were down-regulated in late-stage compared to early-stage NSCLC. The top five DEGs were KRT6B, SPRR1A, KRT13, KRT6A, and KRT5. Keratin 6B (KRT6B) was the most significantly increased gene in late-stage NSCLC. From the GEPIA analysis, we concluded that IGHV4-31 and IGKV1-9 might be used as diagnostic biomarkers, while KRT6B and KRT6A might be used as prognostic biomarkers. However, further clinical validation is needed.
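
A minimal pandas sketch of the threshold filtering described above, assuming a DESeq2-style results table with `padj` and `log2FoldChange` columns (the file name is hypothetical):

```python
import pandas as pd

res = pd.read_csv("deseq2_results.csv")  # assumed export of DESeq2 results

# NSCLC vs. normal: FDR-adjusted p <= 0.05 and |log2 fold change| >= 4
degs_tumor = res[(res["padj"] <= 0.05) & (res["log2FoldChange"].abs() >= 4)]

# Early- vs. late-stage: FDR-adjusted p <= 0.05 and |log2 fold change| >= 2
degs_stage = res[(res["padj"] <= 0.05) & (res["log2FoldChange"].abs() >= 2)]

up = degs_tumor[degs_tumor["log2FoldChange"] > 0]    # up-regulated genes
down = degs_tumor[degs_tumor["log2FoldChange"] < 0]  # down-regulated genes
```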

Keywords: Bioinformatics, differentially expressed genes, non-small cell lung cancer, transcriptomics.

71 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms such as Blackboard have two major shortcomings: limited data capture, a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms, which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), a large amount of additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, for they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course, for example to course content (certain sections of learning material may be shown not to help many learners master the intended learning outcomes) or course design (such as the type and duration of feedback).
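
A minimal sketch of how one of the described rules (repeating a test at least four times before passing) might be detected from simplified xAPI-like statements; the statement structure and verbs below are assumptions, not the project's actual schema:

```python
from collections import defaultdict

statements = [
    {"actor": "a01", "verb": "failed", "object": "compliance-test"},
    {"actor": "a01", "verb": "failed", "object": "compliance-test"},
    {"actor": "a01", "verb": "failed", "object": "compliance-test"},
    {"actor": "a01", "verb": "failed", "object": "compliance-test"},
    {"actor": "a01", "verb": "passed", "object": "compliance-test"},
]

fail_counts = defaultdict(int)
flagged = set()
for s in statements:                       # statements assumed in time order
    key = (s["actor"], s["object"])
    if s["verb"] == "failed":
        fail_counts[key] += 1
    elif s["verb"] == "passed" and fail_counts[key] >= 4:
        flagged.add(key)                   # likely guessing, not studying

print(flagged)  # {('a01', 'compliance-test')}
```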

Keywords: Compliance Course, Corporate Training, Learner Behaviours, xAPI.

70 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders, private and institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. The trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed to build a reliable trend line, which is the base for limit conditions and automated investment signals and the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions to build a mathematical filter for investment opportunities, and the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the presented Price Prediction Line model is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
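
As an illustration only, a generic prediction-line filter might look like the sketch below; it uses a least-squares trend line where the paper uses a trigonometric construction, so it shows the shape of the idea rather than the author's actual model:

```python
import numpy as np

def prediction_line_signal(prices, window=50, band=0.02):
    """Fit a trend line to the last `window` closes and signal only when
    price leaves a tolerance band around it (limit condition)."""
    y = np.asarray(prices[-window:], dtype=float)
    t = np.arange(window)
    slope, intercept = np.polyfit(t, y, 1)   # the "prediction line"
    predicted = slope * window + intercept   # next-step value on the line
    last = y[-1]
    if slope > 0 and last < predicted * (1 - band):
        return "buy"    # price dipped below a rising line: entry signal
    if slope < 0 and last > predicted * (1 + band):
        return "sell"   # price spiked above a falling line: exit signal
    return "hold"       # limit condition not met: no trade
```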

Keywords: Algorithmic trading, automated investment system, DAX Deutscher Aktienindex.

69 Effect of Non-Metallic Inclusion from the Continuous Casting Process on the Multi-Stage Forging Process and the Tensile Strength of the Bolt: A Case Study

Authors: Tomasz Dubiel, Tadeusz Balawender, Mirosław Osetek

Abstract:

The paper presents the influence of non-metallic inclusions on the multi-stage forging process and the mechanical properties of a dodecagon socket bolt used in the automotive industry. The detected metallurgical defect was so large that it directly influenced the mechanical properties of the bolt and resulted in failure to meet the requirements of the mechanical property class. In order to assess the defect, an X-ray examination and a metallographic examination of the defective bolt were performed, showing an exogenous non-metallic inclusion. The size of the defect in the cross section was 0.531 mm in width and 1.523 mm in length; the defect was continuous along the entire axis of the bolt. For the analysis, a finite element method (FEM) simulation of the multi-stage forging process was designed, taking into account a non-metallic inclusion parallel to the sample axis, reflecting the studied case. The propagation of the defect due to material upset in the head area was analyzed. The final forging stage, shaping the dodecagonal socket and filling the flange area, was studied in particular. The defect was observed to significantly reduce the effective cross-section as a result of its expansion perpendicular to the axis of the bolt. The mechanical properties of products with and without the defect were analyzed. In the first step, the hardness test confirmed that the required value for mechanical class 8.8 was obtained for both bolt types. In the second step, the bolts were subjected to a static tensile test. The bolts without the defect gave a positive result, while all 10 bolts with the defect gave a negative result, achieving a tensile strength below the requirements. The tensile strength tests were confirmed by metallographic tests and FEM simulation with the inclusion spread perpendicular in the area of the head. The bolts were damaged directly under the bolt head, which is inconsistent with the requirements of ISO 898-1. It has been shown that non-metallic inclusions oriented along the axis of the bolt can directly cause loss of functionality, and such defects should be detected even before assembly into the machine element.

Keywords: Continuous casting, multi-stage forging, non-metallic inclusion, upset bolt head.

68 Optical Verification of an Ophthalmological Examination Apparatus Employing the Electroretinogram Function on Fundus-Related Perimetry

Authors: Naoto Suzuki

Abstract:

Japanese people are affected by common causes of eyesight loss such as glaucoma, diabetic retinopathy, pigmentary retinal degeneration, and age-related macular degeneration. We developed an ophthalmological examination apparatus with fundus camera, fundus-related perimetry (microperimetry), and electroretinogram (ERG) functions to diagnose a variety of diseases that cause eyesight loss. The experimental apparatus was constructed with the same optical system as a fundus camera. The microperimetry optical system was calculated and added to the experimental apparatus using the optical engineering software OpTaliX-LT 10.8 from the German company Optenso. We also added an Edmund infrared camera (EO-0413), a lens with a 25 mm focal length, a 45° cold mirror, a 12 V/50 W halogen lamp, and an 8-inch monitor. We made an artificial eye from a plano-convex lens, a black spacer, and a hemispherical cup. The hemispherical cup had a small section of paper at the bottom. The artificial eye was photographed five times using the experimental apparatus. Software was created with C++Builder 10.2 to display the examination target on the monitor and save examination data. The retinal fundus was displayed on the monitor with 1 mm of length and width corresponding to resolutions of 70.4 ± 4.1 and 74.7 ± 6.8 pixels, respectively. The microperimetry and ERG functions were successfully added to the experimental ophthalmological apparatus. A moving machine was developed to measure the artificial eye's movement. The artificial eye's rear part was painted black, with white in the central area. It was rotated 10 degrees from one side to the other, and the movement was captured five times as motion videos. Three static images were extracted from one of the captured motion videos, showing the artificial eye facing the center, right, and left directions. The three images were processed using Scilab 6.1.0 and Image Processing and Computer Vision Toolbox 4.1.2, including trimming, binarization, windowing, deletion of the peripheral area, and morphological operations. To calculate the artificial eye's fundus center, we added a gravity method to the program to calculate the center of gravity of the connected components. From the three images, the image processing could calculate the center position.
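
A rough Python analogue of the binarization and gravity (centroid) steps described above; the threshold and region polarity are assumptions, and the original work used Scilab rather than Python:

```python
import numpy as np

def fundus_center(gray_image, threshold=128):
    """Binarize the image and return the center of gravity (row, col)
    of the segmented region, as in the gravity method described."""
    binary = np.asarray(gray_image) < threshold   # binarization (dark region)
    coords = np.argwhere(binary)                  # pixel coordinates in region
    if coords.size == 0:
        return None                               # nothing segmented
    return coords.mean(axis=0)                    # centroid = mean position
```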

Keywords: Ophthalmological examination apparatus, microperimetry, electroretinogram, eye movement.

67 Retrieval Augmented Generation against the Machine: Merging Human Cyber Security Expertise with Generative AI

Authors: Brennan Lodge

Abstract:

Amidst a complex regulatory landscape, Retrieval Augmented Generation (RAG) emerges as a transformative tool for Governance, Risk and Compliance (GRC) officers. This paper details the application of RAG in synthesizing Large Language Models (LLMs) with external knowledge bases, offering GRC professionals an advanced means to adapt to rapid changes in compliance requirements. While the development of standalone LLMs is exciting, such models have their downsides: LLMs cannot easily expand or revise their memory, cannot straightforwardly provide insight into their predictions, and may produce "hallucinations." Leveraging a pre-trained seq2seq transformer and a dense vector index of domain-specific data, this approach integrates real-time data retrieval into the generative process, enabling gap analysis and the dynamic generation of compliance and risk management content. We delve into the mechanics of RAG, focusing on its dual structure, which pairs parametric knowledge contained within the transformer model with non-parametric data extracted from an updatable corpus. This hybrid model enhances decision-making through context-rich insights, drawing from the most current and relevant information, thereby enabling GRC officers to maintain a proactive compliance stance. Our methodology aligns with the latest advances in neural network fine-tuning, providing a granular, token-level application of retrieved information to inform and generate compliance narratives. By employing RAG, we exhibit a scalable solution that can adapt to novel regulatory challenges and cybersecurity threats, offering GRC officers a robust, predictive tool that augments their expertise. The granular application of RAG's dual structure not only improves compliance and risk management protocols but also informs the development of compliance narratives with pinpoint accuracy. It underscores AI's emerging role in strategic risk mitigation and proactive policy formation, positioning GRC officers to anticipate and navigate the complexities of regulatory evolution confidently.
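
A minimal sketch of the retrieval half of RAG under the assumptions above: documents and queries are already embedded as dense vectors, and the top-k passages are pulled into the generator's prompt (the embedding step and prompt template are placeholders, not the paper's implementation):

```python
import numpy as np

def retrieve(query_vec, doc_vecs, k=3):
    """Dense retrieval: cosine similarity against a vector index of
    compliance passages, returning indices of the top-k matches."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]

# The retrieved passages then condition the seq2seq generator, e.g.:
# prompt = f"Context:\n{passages}\n\nQuestion: {question}\nAnswer:"
```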

Keywords: Retrieval Augmented Generation, Governance Risk and Compliance, Cybersecurity, AI-driven Compliance, Risk Management, Generative AI.

66 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank

Authors: Jalal Haghighat Monfared, Zahra Akbari

Abstract:

Today, business executives need useful information to make better decisions. Banks have also been using information tools, rapidly extracting information from sources with the help of business intelligence, so that they can direct the decision-making process toward their desired goals. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's general departments (190 people) who use business intelligence reports. The sample size of 123 was determined randomly by statistical methods. In this research, relevant statistical inference has been used for data analysis and hypothesis testing. In the first stage, the normality of the data was investigated using the Kolmogorov-Smirnov test, and in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, integration with other systems, user access, flexibility, and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of the present research. This shows that Mellat Bank needs to pay more attention to choosing business intelligence systems with high flexibility, in terms of the ability to produce custom-formatted reports. The quality of data in business intelligence systems showed the next strongest relationship with the quality of decision making. Therefore, improving the quality of data, including the source of data (internal or external), the type of data (quantitative or qualitative), the credibility of the data, and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
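
A minimal scipy sketch of the two statistical steps named above (normality check, then correlation), using placeholder questionnaire scores rather than the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bi_capability = rng.normal(3.8, 0.5, 123)      # placeholder scores, n = 123
decision_quality = rng.normal(3.6, 0.6, 123)   # placeholder scores

# Kolmogorov-Smirnov test of the standardized scores against N(0, 1)
z = (bi_capability - bi_capability.mean()) / bi_capability.std(ddof=1)
print(stats.kstest(z, "norm"))

# Pearson correlation coefficient for the hypothesis test
r, p = stats.pearsonr(bi_capability, decision_quality)
print(f"r = {r:.3f}, p = {p:.4f}")
```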

Keywords: Business intelligence, business intelligence capability, decision making, decision quality.

65 Estimation of Individual Power of Noise Sources Operating Simultaneously

Authors: Pankaj Chandna, Surinder Deswal, Arunesh Chandra, S. K. Sharma

Abstract:

Noise has adverse effects on human health and comfort. Noise not only causes hearing impairment, but also acts as a causal factor for stress and raised systolic pressure. Additionally, it can be a causal factor in work accidents, both by masking hazards and warning signals and by impeding concentration. Industry workers also suffer psychological and physical stress, as well as hearing loss, due to industrial noise. This paper proposes an approach that enables engineers to point out quantitatively the noisiest source for modification while multiple machines are operating simultaneously. A model with point sources and spherical radiation in a free field was adopted to formulate the problem. The procedure works very well in ideal cases (point sources and free field). However, most industrial noise problems are complicated by the fact that the noise is confined in a room. Reflections from the walls, floor, ceiling, and equipment in a room create a reverberant sound field that alters the sound wave characteristics from those of the free field. The model was therefore validated for a relatively low-absorption room at the NIT Kurukshetra Central Workshop. The validation results indicated that the estimated sound powers of noise sources under simultaneous conditions were on the lower side, within error limits of 3.56-6.35%, suggesting the methodology is suitable for practical implementation in industry. To demonstrate the application of the above analytical procedure for estimating the sound power of noise sources under simultaneous operating conditions, a manufacturing facility (Railway Workshop at Yamunanagar, India) having five sound sources (machines) on its workshop floor is considered in this study. The findings of the case study identified the two most effective candidates (noise sources) for noise control in the Railway Workshop Yamunanagar, India. The study suggests that modification of the design and/or replacement of these two identified noisiest sources (machines) would be necessary to achieve an effective reduction in noise levels. Further, the estimated data allow engineers to better understand the noise situation of the workplace and to revise the map when changes in noise level occur due to a workplace re-layout.
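
The free-field point-source model referred to above typically rests on two standard acoustics relations (textbook forms, not quoted from the paper): the level at distance r from a source of power level L_W, and the energy-based summation of simultaneously operating sources:

```latex
L_p = L_W - 20\log_{10} r - 11 \ \text{dB}
% point source, spherical radiation, free field, r in metres
L_{p,\mathrm{total}} = 10\log_{10}\sum_i 10^{L_{p,i}/10}
% total level with several machines running; inverting this system from
% measurements at several positions yields each source's individual power
```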

Keywords: Industrial noise, sound power level, multiple noise sources, sources contribution.

64 Evaluating Complexity – Ethical Challenges in Computational Design Processes

Authors: J. Partanen

Abstract:

Complexity, as a theoretical background, has made it easier to understand and explain the features and dynamic behavior of various complex systems. As the common theoretical background has confirmed, borrowing terminology for design from the natural sciences has helped to control and understand urban complexity. Phenomena like self-organization, evolution, and adaptation are appropriate for describing the formerly inaccessible characteristics of the complex environment in unpredictable bottom-up systems. Increased computing capacity has been a key element in capturing the chaotic nature of these systems. A paradigm shift in urban planning and architectural design has forced us to give up the illusion of total control over the urban environment, and consequently to seek novel methods for steering its development. New methods using dynamic modeling have offered a real option for a more thorough understanding of complexity and urban processes. At best, new approaches may renew design processes so that we get a better grip on the complex world via more flexible processes, support urban environmental diversity, and respond to our needs beyond basic welfare by liberating ourselves from standardized minimalism. A complex system and its features are as such beyond human ethics. Self-organization or evolution is neither good nor bad; their mechanisms are by nature devoid of reason. They are common in urban dynamics as well as in natural processes. They are features of a complex system, and they cannot be prevented. Yet their dynamics can be studied and supported. The paradigm of complexity and new design approaches has been criticized for a lack of humanity and morality, but the ethical implications of scientific or computational design processes have not been much discussed. It is important to distinguish the (unexciting) ethics of the theory and tools from the ethics of computer-aided processes based on ethical decisions. Urban planning and architecture cannot be based on the survival of the fittest; however, the natural dynamics of the system cannot be impeded on grounds of being "non-human." In this paper, the ethical challenges of using dynamic models are contemplated in light of a few examples of new architecture, dynamic urban models, and the literature. It is suggested that ethical challenges in computational design processes could be reframed under the concepts of responsibility and transparency.

Keywords: Urban planning, architecture, dynamic modeling, ethics, complexity theory.

63 An Induction Motor Drive System with Intelligent Supervisory Control for Water Networks Including Storage Tank

Authors: O. S. Ebrahim, K. O. Shawky, M. A. Badr, P. K. Jain

Abstract:

This paper describes an efficient, low-cost, high-availability induction motor (IM) drive system with intelligent supervisory control for water distribution networks including a storage tank. To increase operational efficiency and reduce cost, the IM drive system includes a main pumping unit and an auxiliary voltage source inverter (VSI) fed unit. The main unit comprises a smart star/delta starter, a regenerative fluid clutch, a switched VAR compensator, and a hysteresis liquid-level controller. A three-state energy saving mode (ESM) is defined at no-load, and a logic algorithm is developed for best energy cost reduction. To reduce voltage sag, the supervisory controller operates the switched VAR compensator upon motor starting. To provide the smart star/delta starter at low cost, a method based on current sensing is developed for interlocking, malfunction detection, and life-cycle counting, and is used to synthesize an improved fuzzy logic (FL) based availability assessment scheme. Furthermore, a recurrent neural network (RNN) full-state estimator is proposed to provide a sensor fault-tolerant algorithm for the feedback control. The auxiliary unit works at low flow rates and improves the system's efficiency and flexibility for distributed generation during islanding mode. Compared with a doubly-fed IM, the proposed system ensures 30% working throughput under main motor/pump fault conditions, higher efficiency, and a marginal cost difference, which is critically important in the case of water networks. Theoretical analysis, computer simulations, and cost and efficiency evaluations, using timely cascaded energy-conservative systems, are performed on an IM experimental setup to demonstrate the validity and effectiveness of the proposed drive and control.
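
A minimal sketch of the hysteresis liquid-level controller mentioned above; the thresholds are illustrative, not the system's design values:

```python
def hysteresis_level_control(level_m, pump_on, low=2.0, high=4.5):
    """Two-threshold (hysteresis) control of the main pumping unit.
    The dead band between `low` and `high` prevents rapid on/off cycling."""
    if level_m <= low:
        return True       # tank nearly empty: start the pump
    if level_m >= high:
        return False      # tank full: stop the pump
    return pump_on        # inside the band: keep the previous state
```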

Keywords: Artificial Neural Network, ANN, Availability Assessment, Cloud Computing, Energy Saving, Induction Machine, IM, Supervisory Control, Fuzzy Logic, FL, Pumped Storage.

62 Development of a Miniature and Low-Cost IoT-Based Remote Health Monitoring Device

Authors: Sreejith Jayachandran, Mojtaba Ghodsi, Morteza Mohammadzaheri

Abstract:

The modern busy world runs on new embedded technologies based on computers and software, yet some people are unable to monitor their health condition or keep regular medical check-ups. Some postpone medical check-ups due to a lack of time and convenience, while others skip these regular evaluations and medical examinations because of huge medical bills and hospital expenses. In this research, we present a telemonitoring device capable of monitoring, checking, and evaluating the health status of the human body remotely through the internet, for the needs of all kinds of people. The remote health monitoring device is a microcontroller-based embedded unit. The various sensors in this device are connected to the human body, and with the help of an Arduino UNO board, the required analogue data are collected from the sensors. The microcontroller on the Arduino board processes the collected analogue data into digital data, transfers that information to the cloud, and stores it there; the processed digital data are also instantly displayed on the LCD attached to the device. By accessing the cloud storage with a username and password, the person's health care team/doctors and other health staff can collect these data for assessment and follow-up of the patient. Besides that, family members/guardians can use and evaluate these data for awareness of the patient's current health status. Moreover, the system is connected to a GPS module, so in emergencies the concerned team can locate the patient or the person carrying the device. The setup continuously evaluates and transfers the data to the cloud, and the user can prefix a normal value range for the evaluation; for example, the normal blood pressure value is universally prefixed as 80/120 mmHg. Similarly, the Remote Health Monitoring System (RHMS) allows fixing ranges of values referred to as normal coefficients. This IoT-based miniature system (11×10×10 cm³, weighing only 500 g) consumes just 10 mW. The smart monitoring system is manufactured for 100 GBP (British Pounds Sterling); it can facilitate communication between patients and health systems, and it can also be employed for numerous other uses, including communication in the aerospace and transportation sectors.
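
A minimal sketch of the prefixed normal-range evaluation described above; the blood-pressure range follows the abstract, while the other field names and ranges are illustrative:

```python
# Prefixed normal ranges (blood pressure per the abstract; others assumed).
NORMAL_RANGES = {
    "systolic_mmHg": (80, 120),
    "heart_rate_bpm": (60, 100),
    "spo2_percent": (95, 100),
}

def evaluate(reading):
    """Return the vitals in `reading` that fall outside their normal range."""
    alerts = []
    for name, (lo, hi) in NORMAL_RANGES.items():
        value = reading.get(name)
        if value is not None and not (lo <= value <= hi):
            alerts.append((name, value))
    return alerts

print(evaluate({"systolic_mmHg": 135, "heart_rate_bpm": 72}))  # flags BP
```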

Keywords: Embedded Technology, Telemonitoring system, Microcontroller, Arduino UNO, Cloud storage, GPS, RHMS, Remote Health Monitoring System, Alert system.

61 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost; the File Path Agent prescribed deleting 1,510 files, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
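
As a simplified illustration of one agent's task, a hash-set check might look like the sketch below; the hash value is a placeholder rather than a Lone Wolf artifact, and the actual framework runs on Java agents:

```python
import hashlib
from pathlib import Path

KNOWN_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder MD5 values

def hash_set_agent(evidence_dir):
    """Hash every file under `evidence_dir` and flag known-relevant files."""
    hits = []
    for f in Path(evidence_dir).rglob("*"):
        if f.is_file():
            digest = hashlib.md5(f.read_bytes()).hexdigest()
            if digest in KNOWN_HASHES:
                hits.append((str(f), digest))
    return hits
```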

Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.

60 Bone Mineral Density and Trabecular Bone Score in Ukrainian Women with Obesity

Authors: Vladyslav Povoroznyuk, Nataliia Dzerovych, Larysa Martynyuk, Tetiana Kovtun

Abstract:

Obesity and osteoporosis are two diseases whose increasing prevalence and high impact on global morbidity and mortality over the two recent decades have gained them the status of major health threats worldwide. Obesity is thought to affect bone metabolism through complex mechanisms. Debated data on the connection between bone mineral density and fracture prevalence in obese patients are widely presented in the literature. There is evidence that the correlation between weight and fracture risk is site-specific. This study is aimed at determining the connection between bone mineral density (BMD) and trabecular bone score (TBS) parameters in Ukrainian women suffering from obesity. We examined 1025 women aged 40-89 years and divided them into groups according to their body mass index: Group A included 360 women with obesity (BMI ≥30 kg/m2), and Group B included 665 women without obesity (BMI <30 kg/m2). The BMD of the total body, lumbar spine at L1-L4, femur, and forearm was measured by DXA (Prodigy, GEHC Lunar, Madison, WI, USA). The TBS of L1-L4 was assessed by means of TBS iNsight® software installed on our DXA machine (Med-Imaps, Pessac, France). In general, obese women had a significantly higher BMD of the lumbar spine, femoral neck, proximal femur, total body, and ultradistal forearm (p<0.001) in comparison with women without obesity. The TBS of L1-L4 was significantly lower in obese women compared to nonobese women (p<0.001). The BMD of the lumbar spine, femoral neck, and total body differed to a significant extent in women aged 40-49, 50-59, 60-69, and 70-79 years (p<0.05). At the same time, in women aged 80-89 years the BMD of the lumbar spine (p=0.09), femoral neck (p=0.22), and total body (p=0.06) barely differed. The BMD of the ultradistal forearm was significantly higher in women of all age groups (p<0.05). The TBS of L1-L4 in all the age groups tended to be lower in obese women compared with the nonobese; however, those data were not statistically significant. By contrast, a significant positive correlation was observed between fat mass and BMD at different sites. The correlation between fat mass and the TBS of L1-L4 was also significant, although negative. Women with vertebral fractures had a significantly lower body weight, body mass index, and total body fat mass in comparison with women without vertebral fractures in their anamnesis. In obese women the frequency of vertebral fractures was 27%, while in women without obesity it was 57%.

Keywords: Bone mineral density, trabecular bone score, obesity, women.

59 The Strategic Engine Model: Redefined Strategy Structure, as per Market-and Resource-Based Theory Application, Tested in the Automotive Industry

Authors: Krassimir Todorov

Abstract:

The purpose of the paper is to redefine the levels of strategy structure, corporate, business, and functional, that were established over the past several decades, into a conceptual model consisting of corporate, business, and operations strategies reinforced by functional strategies. We propose a conceptual framework of different perspectives on the role of strategic operations as a separate strategic level, and reposition the remaining functional strategies as supporting tools existing at all three levels. The proposed model is called 'the strategic engine', since the mutual relationships of its ingredients are identical to the main elements and working principle of the internal combustion engine. Based on the theoretical essence related to every strategic level, we argue that the strategic engine model is useful for managers seeking to safeguard the competitive advantage of their companies. Each strategy level is researched through its basic elements. At the corporate level, we examine the scope of the firm's products and its vertical and geographical coverage. At the business level, the point of interest is limited to the basic elements of SWOT analysis. At the operations level, the key research issue relates to the scope of the following performance indicators: cost, quality, speed, flexibility, and dependability. In this relationship, the paper provides a different view of the role of operations strategy within the overall strategy concept. We argue that the theoretical essence of operations goes far beyond the scope of traditionally accepted business functions. Exploring the applications of resource-based theory and market-based theory within the strategic levels framework, we show that there is a logical consequence of the theoretical impact in corporate, business, and operations strategy: at every strategic level, the validity of one theory gives way to that of the other. Practical application of the conceptual model is tested in the automotive industry. The proposed theoretical concept is inspired by a leading global automotive group, Inchcape PLC, listed on the London Stock Exchange and a constituent of the FTSE 250 Index.

Keywords: Business strategy, corporate strategy, functional strategies, operations strategy.

58 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g., MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration, and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in Gold Coast, Australia. For comparison, simulation outcomes for the same three catchments from the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement, with the MIKE URBAN results lying within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE), and maximum error (ME) was found reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be obtained, whereas MIKE URBAN just provides a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
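
A generic ABC rejection sketch of the calibration loop described above; the distance metric, tolerance, and model interface are assumptions, and the actual framework was written in R:

```python
import numpy as np

def abc_rejection(observed, simulate, sample_prior, n_draws=10000, tol=0.1):
    """Keep parameter draws whose simulated runoff series lies within
    `tol` of the observed series; accepted draws form the ABC posterior."""
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior()                  # e.g., initial loss, reduction
        sim = simulate(theta)                   #   factor, t_c, and time-lag
        if np.linalg.norm(sim - observed) < tol:
            accepted.append(theta)
    return np.array(accepted)                   # posterior gives uncertainty
```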

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.

57 Rice Area Determination Using Landsat-Based Indices and Land Surface Temperature Values

Authors: Burçin Saltık, Levent Genç

Abstract:

In this study, we aimed to establish a route for the identification of rice cultivation areas within the Thrace and Marmara regions of Turkey using remote sensing and GIS. Landsat 8 (OLI-TIRS) imagery acquired in the 2013 production season (Path/Row 181/32) was used. Four different seasonal images were generated utilizing the original bands and different transformation techniques. All images were classified individually using supervised classification techniques, and Land Use Land Cover (LULC) maps were generated with 8 classes. The area (ha, %) of each class was calculated. In addition, district-based rice distribution maps were developed, and the results of these maps were compared with the actual rice cultivation area records of the Turkish Statistical Institute (TurkSTAT; TSI). Accuracy assessments were conducted, and the most accurate map was selected based on the accuracy assessment and coherency with the TSI results. Additionally, rice areas on slopes over 4° were considered mis-classified pixels and were eliminated using a slope map and GIS tools. Finally, randomized rice zones were selected to obtain maximum-minimum value ranges from the NDVI, LSWI, and LST images of each date (May, June, July, August, and September separately), to test whether these ranges may be used for rice area determination via the raster calculator tool of ArcGIS. The most accurate classification for rice determination was obtained from the seasonal LSWI LULC map; considering the TSI data and the accuracy assessment results, mis-classified pixels were eliminated from this map. According to the results, 83,151.5 ha of rice areas exist within the study area, exceeding the TSI records by 12,702.3 ha. The use of the maximum-minimum ranges of rice-area NDVI, LSWI, and LST was tested in Meric district. Using the value ranges obtained from the July imagery gave the closest results to the TSI records, with a difference of only 206.4 ha. This difference is normal given the relatively low resolution of the images. Thus, employing images with higher spectral, spatial, temporal, and radiometric resolutions may provide more reliable results.
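
A minimal numpy sketch of one index computation and the min-max masking test described above; the threshold values are placeholders, not the study's July ranges:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from Landsat 8 OLI reflectance (band 5 = NIR, band 4 = red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids divide-by-zero

def rice_mask(index_img, vmin, vmax):
    """Raster-calculator-style test: pixels inside the rice-zone range."""
    return (index_img >= vmin) & (index_img <= vmax)

# e.g., mask = rice_mask(ndvi(band5, band4), 0.45, 0.80)  # placeholder range
```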

Keywords: Landsat 8 (OLI-TIRS), LULC, spectral indices, rice.

56 A New Method for Extracting Ocean Wave Energy Utilizing the Wave Shoaling Phenomenon

Authors: Shafiq R. Qureshi, Syed Noman Danish, Muhammad Saeed Khalid

Abstract:

Fossil fuels are the major source for meeting world energy requirements, but their rapid depletion and adverse effects on our ecological system are of major concern. Renewable energy utilization is the need of the time to meet future challenges, and ocean energy is one of these promising resources. Three-fourths of the earth's surface is covered by the oceans, and this enormous energy resource is contained in the oceans' waters, the air above the oceans, and the land beneath them. The renewable energy of the ocean is mainly contained in waves, ocean currents, and offshore solar energy. Relatively few efforts have been made to harness this reliable and predictable resource. Harnessing ocean energy needs detailed knowledge of the underlying governing equations and their analysis. With the advent of extraordinary computational resources, it is now possible to predict wave climatology in lab simulations. Several techniques have been developed, mostly stemming from numerical analysis of the Navier-Stokes equations. This paper presents a brief overview of such mathematical models and tools for understanding and analyzing wave climatology. Models of the 1st, 2nd, and 3rd generations have been developed to estimate wave characteristics and assess the power potential, and a brief overview of available wave energy technologies is also given. A novel concept for an on-shore wave energy extraction method is presented at the end. The concept is based upon total energy conservation, where the energy of the wave is transferred to a flexible converter to increase its kinetic energy. The squeezing action of the external pressure on the converter body results in increased velocities at the discharge section. The high velocity head can then be used for energy storage or directly for power generation. This converter utilizes both the potential and the kinetic energy of the waves and is designed for on-shore or near-shore application. Increased wave height at the shore due to shoaling effects increases the potential energy of the waves, which is converted to renewable energy. This approach will result in an economical wave energy converter, due to near-shore installation and denser waves caused by shoaling, and the method will be more efficient because it taps both the potential and the kinetic energy of the waves.
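
The shoaling argument can be made concrete with the standard deep-water wave-power relation (a textbook formula, not derived in the abstract): the energy flux per unit crest length grows with the square of wave height, so shoaling-amplified waves at the shore carry disproportionately more power.

```latex
P = \frac{\rho g^{2}}{32\pi} H^{2} T
% rho: water density, g: gravity, H: wave height, T: wave period;
% shoaling increases H near shore, raising P as H^2.
```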

Keywords: Energy Utilizing, Wave Shoaling Phenomenon

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2676
55 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is now a popular research topic. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policy and for healthy, reliable grid systems. Effective forecasting of renewable energy load helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, in particular the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example in predicting wind and solar power. Historical load and weather information are the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data for the Turkish electricity market via a deep neural network approach including the LSTM technique. 432 different models were created by varying the number of layers, cell counts, and dropout rates. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared using MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results among the 432 tested models were 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models is successful compared with results reported in the literature.
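
A minimal sketch of one such model is given below, assuming Keras/TensorFlow (the abstract does not name a framework). The window length, cell count, and dropout rate are illustrative stand-ins for the 432 configurations searched, and the sinusoid is a stand-in for the hourly load series.

    import numpy as np
    from tensorflow import keras

    # Reshape an hourly series into sliding windows: predict the next hour
    # from the previous 24 hours.
    def make_windows(series, window=24):
        X = np.stack([series[i:i + window] for i in range(len(series) - window)])
        y = series[window:]
        return X[..., np.newaxis], y

    series = np.sin(np.linspace(0, 100, 2000)).astype("float32")  # stand-in data
    X, y = make_windows(series)

    model = keras.Sequential([
        keras.layers.Input(shape=(24, 1)),
        keras.layers.LSTM(64),           # cell count is illustrative
        keras.layers.Dropout(0.2),       # one of the dropout rates searched
        keras.layers.Dense(1),
    ])
    # ADAM optimizer with MSE loss and MAE metric, matching the evaluation above
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)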

Keywords: Deep learning, long short-term memory, energy, renewable energy load forecasting.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1603
54 Waste Management in a Hot Laboratory of Japan Atomic Energy Agency – 1: Overview and Activities in Chemical Processing Facility

Authors: Kazunori Nomura, Hiromichi Ogi, Masaumi Nakahara, Sou Watanabe, Atsuhiro Shibata

Abstract:

The Chemical Processing Facility of the Japan Atomic Energy Agency is a basic research field for advanced back-end technology development using actual high-level radioactive materials, such as irradiated fuel from fast reactors and high-level liquid waste from reprocessing plants. As is typical of a research facility, various kinds of chemical reagents have been used for fundamental tests. Most of them were treated properly and stored in the liquid waste vessels of the facility, but some were left untreated in the experimental space as a kind of legacy waste, and this waste must be treated safely. Meanwhile, we formulated the Medium- and Long-Term Management Plan of Japan Atomic Energy Agency Facilities. This comprehensive plan designates the Chemical Processing Facility as one of the facilities to be decommissioned, and treatment of the legacy waste beforehand is a necessary step in the decommissioning operation. Under these circumstances, we launched a collaborative research project called the STRAD project (Systematic Treatment of Radioactive liquid waste for Decommissioning) to develop treatment processes for the wastes of nuclear research facilities. In this project, decomposition methods for chemicals that cause troublesome phenomena such as corrosion and explosion have been developed, and there is a prospect of decomposing them in the facility by simple methods. Solidification of the aqueous and organic liquid wastes after decomposition has been studied by adding cement or coagulants. Furthermore, experimental tools of various materials were stabilized and compacted before being packed into waste containers, which is expected to reduce the number of solid waste shipments and widen the operating space. Some achievements of these studies are presented in this paper. The project is expected to contribute beneficial waste management outcomes that can be shared worldwide.

Keywords: Chemical Processing Facility, medium- and long-term management plan of JAEA Facilities, STRAD project, treatment of radioactive waste.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 882
53 Comparative Study of Sedimentation in Hydraulic Structures Using SHARC and SSIIM Software – A Case of the Dez and Hamidieh Intake Structures in Iran

Authors: A.H. Sajedipoor, N. Hedayat, M. Mashal, R. Nazarzadeh

Abstract:

Sedimentation is a complex hydraulic phenomenon that has emerged as a major operational and maintenance consideration in modern hydraulic engineering in general and river engineering in particular. Sediment accumulation along the river course, eventually forming islands, affects water intake in canal systems fed by storage reservoirs. Without proper management, sediment transport can lead to major operational challenges in the water distribution systems of arid regions such as the Dez and Hamidieh command areas. This paper investigates sedimentation in the Western Canal of the Dez Diversion Weir using the SHARC model and compares the results with those for the two intake structures of the Hamidieh dam in Iran, modeled with SSIIM. The objectives were to identify the factors that influence the process, check the reliability of the outcomes, and suggest ways to mitigate the implications for the operation and maintenance of the structures. The model estimated sand and silt load concentrations at 193 ppm and 827 ppm, respectively. A more or less similar pattern was found in Hamidieh, where sediment formation impeded water intake in the canal system. Given the recorded average annual bed load of 165 ppm and average suspended sediment load of 837 ppm in the Dez, there was a significant statistical difference (16%) between the sand estimates, whereas no significant difference (1.2%) was found for the silt grain sizes. One explanation for this finding is the considerable meandering along the 6 km river course, which accounts for the recent shift in hydraulic behavior along the studied reach. The downstream sand concentration, relative to the present state of the canal, followed a steeply descending curve, while sediment trapping followed a steeply ascending one; both occurred because the diversion weir was not included in the simulation model. The comparative study showed very close agreement between the results, indicating that both packages can be used as accurate and reliable analytical tools for simulating sedimentation in hydraulic engineering.
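
The reported discrepancies can be reproduced with simple arithmetic. One convention that matches both figures is the percent difference taken relative to the mean of the modeled and recorded values; the snippet below is a minimal sketch of that check (the choice of denominator is our assumption, as the abstract does not state it).

    def percent_diff(modeled_ppm, recorded_ppm):
        # Percent difference relative to the mean of the two values
        mean = (modeled_ppm + recorded_ppm) / 2
        return abs(modeled_ppm - recorded_ppm) / mean * 100

    print(f"sand: {percent_diff(193, 165):.0f}%")  # -> 16%
    print(f"silt: {percent_diff(827, 837):.1f}%")  # -> 1.2%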

Keywords: SHARC, SSIIM, sedimentation, Dez diversion weir, Hamidieh dam, Intake structures

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1762
52 Assessing the Theoretical Suitability of Sentinel-2 and WorldView-3 Data for Hydrocarbon Mapping of Spill Events, Using HYSS

Authors: K. Tunde Olagunju, C. Scott Allen, F.D. (Freek) van der Meer

Abstract:

Identification of hydrocarbon oil in remote sensing images is often the first step in monitoring oil during spill events. Most remote sensing methods adopt hydrocarbon identification techniques to achieve detection and thereby support an appropriate cleanup program. Identification with optical sensors allows not only detection but also characterization and quantification. Until recently, quantification and characterization in optical remote sensing were only potentially possible using high-resolution laboratory and airborne imaging spectrometers (hyperspectral data). Unlike multispectral data, hyperspectral data are not freely available, as they are at present mainly obtained via airborne survey. In this research, two operational high-resolution multispectral satellites (WorldView-3 and Sentinel-2) are theoretically assessed for their suitability for hydrocarbon characterization, using the Hydrocarbon Spectra Slope model (HYSS). This method utilizes the two most persistent hydrocarbon diagnostic/absorption features, at 1.73 µm and 2.30 µm, for hydrocarbon mapping on multispectral data. Spectral measurements of seven different hydrocarbon oils (crude and refined) on 10 different substrates, taken with a laboratory ASD FieldSpec, were convolved to Sentinel-2 and WorldView-3 resolution using the sensors' full width at half maximum (FWHM) parameters. The resulting hydrocarbon slope values obtained from the studied samples enable clear qualitative discrimination of most hydrocarbons, despite the presence of different background substrates, particularly on WorldView-3. Owing to the close conformity of the central wavelengths and narrow bandwidths to the key hydrocarbon bands used in HYSS, the qualitative analysis on WorldView-3 was statistically significant at the 95% confidence level (P-value < 0.01) for all studied hydrocarbon oils except diesel. Using multifactor analysis of variance (MANOVA), the discriminating power of HYSS is statistically significant for most hydrocarbon-substrate combinations at Sentinel-2 and WorldView-3 FWHM, revealing the potential of these two operational multispectral sensors as rapid-response tools for hydrocarbon mapping. One notable exception is highly transmissive hydrocarbons on Sentinel-2 data, due to the non-conformity of the spectral bands with the key hydrocarbon absorptions and the relatively coarse bandwidth (> 100 nm).
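
The band-convolution step can be illustrated as follows: each lab spectrum is weighted by a band spectral response function built from the band center and FWHM, and a slope is then taken between the two diagnostic regions. The sketch below assumes Gaussian band responses applied to a synthetic spectrum; the band centers, FWHM values, and the simple two-band slope are illustrative stand-ins, not the exact HYSS formulation.

    import numpy as np

    def gaussian_srf(wl, center, fwhm):
        # Gaussian spectral response function from band center and FWHM
        sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
        return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

    def convolve_to_band(wl, reflectance, center, fwhm):
        # SRF-weighted mean reflectance of a lab spectrum within one band
        srf = gaussian_srf(wl, center, fwhm)
        return np.trapz(srf * reflectance, wl) / np.trapz(srf, wl)

    # Synthetic spectrum with an absorption near 1.73 um; band centers and
    # FWHM values below are placeholders for real SWIR band parameters.
    wl = np.linspace(1.4, 2.5, 1101)  # wavelength grid, um
    spectrum = 0.3 - 0.05 * np.exp(-((wl - 1.73) / 0.02) ** 2)
    r1 = convolve_to_band(wl, spectrum, 1.73, 0.045)
    r2 = convolve_to_band(wl, spectrum, 2.30, 0.050)
    slope = (r2 - r1) / (2.30 - 1.73)  # reflectance change per um
    print(slope)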

Keywords: hydrocarbon, oil spill, remote sensing, hyperspectral, multispectral, hydrocarbon-substrate combination, Sentinel-2, WorldView-3

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 712
51 Real Time Control Learning Game - Speed Race by Learning at the Wheel - Development of Data Acquisition System

Authors: Konstantinos Kalovrektis, Chryssanthi Palazi

Abstract:

Schools today face ever-increasing demands in their attempts to ensure that students are well equipped to enter the workforce and navigate a complex world. Research indicates that computer technology can help support learning and the implementation of various experiments and learning games, and that it is especially useful in developing the higher-order skills of critical thinking, observation, comprehension, implementation, comparison, analysis, and active attention to activities such as research, field work, simulations, and scientific inquiry. ICT in education supports the learning procedure by making it more flexible and effective, creating a rich and attractive training environment, and equipping students with knowledge and potential useful for the competitive social environment in which they live. This paper presents the design, development, and evaluation results of an interactive educational game that uses real electric toy vehicles (hardware) on a toy race track. When the game starts, each student selects a specific toy vehicle; the students then answer questionnaires on a computer. A vehicle's speed is tied to the percentage of right answers in a multiple-choice questionnaire (software), and every question carries its own weight depending on its level of difficulty. Via the developed software, each right or wrong answer in the questionnaire increases or decreases the real-time speed of the student's toy vehicle, and the rate of increase or decrease depends on the difficulty level of the question. The aim of the work is to attract students' interest in the learning process and to improve their scores. The developed real-time game was tested using independent populations of students in the age groups 8-10, 11-14, and 15-18 years. Standard educational and statistical analysis tools were used for the evaluation. Results reveal that students using the developed real-time control game scored much higher (60%) than students using a traditional simulation game on the same questionnaire. Results further indicate that students' interest in replaying the developed real-time control game was far higher (70%) than that of students using a traditional simulation game.
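
The speed-control rule is described only qualitatively above; the sketch below shows one plausible implementation, a difficulty-weighted linear speed update. The weight table, step size, and speed limits are illustrative assumptions, not values from the paper.

    # Hypothetical difficulty-weighted speed update, as described qualitatively
    DIFFICULTY_WEIGHT = {"easy": 1.0, "medium": 2.0, "hard": 3.0}  # illustrative

    def update_speed(speed, correct, difficulty, step=5.0, vmin=0.0, vmax=100.0):
        # Raise the vehicle speed for a right answer, lower it for a wrong one;
        # harder questions move the speed by a larger amount.
        delta = step * DIFFICULTY_WEIGHT[difficulty]
        speed += delta if correct else -delta
        return max(vmin, min(vmax, speed))

    speed = 50.0
    speed = update_speed(speed, correct=True, difficulty="hard")   # -> 65.0
    speed = update_speed(speed, correct=False, difficulty="easy")  # -> 60.0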

Keywords: Real time game, sensor, learning games, LabVIEW

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1733
50 IntelligentLogger: A Heavy-Duty Vehicle Fleet Management System Based on IoT and Smart Prediction Techniques

Authors: D. Goustouridis, A. Sideris, I. Sdrolias, G. Loizos, N.-Alexander Tatlas, S. M. Potirakis

Abstract:

Both the daily and the long-term management of a heavy-duty vehicle and construction machinery fleet is an extremely complicated and hard-to-solve problem. This is mainly due to the diversity of the fleet, which concerns not only the vehicle types but also their age and efficiency, and to the fleet volume, which is often of the order of hundreds or even thousands of vehicles and machines. In this paper we present IntelligentLogger, a holistic heavy-duty fleet management system covering a wide range of diverse fleet vehicles, based on specifically designed hardware and software for automated monitoring of vehicle health status and operating costs, in support of smart maintenance. IntelligentLogger is characterized by high adaptability, permitting it to be tailored to practically any heavy-duty vehicle or machine, of modern or legacy technology and of dissimilar uses. Contrary to conventional logistics systems, which suffer from high operational costs and frequent errors, IntelligentLogger provides a cost-effective and reliable integrated solution for the e-management and e-maintenance of fleet members. The system offers the following unique features that enable successful heavy-duty fleet management: (a) recording and storage of the operating data of motorized construction machinery, reliably and in real time, using specifically designed Internet of Things (IoT) sensor nodes that communicate over available network infrastructures, e.g., 3G/LTE; (b) use on any machine, regardless of its age, in a universal way; (c) flexibility and complete customization in data collection and integration with third-party systems, as well as in processing and drawing conclusions; (d) validation, error reporting and correction, and updating of the system's database; (e) artificial intelligence (AI) software for processing information in real time, identifying out-of-normal behavior, and generating alerts; (f) a MicroStrategy-based enterprise BI platform for modeling information and producing reports, dashboards, and alerts focused on optimal vehicle and machinery usage as well as maintenance and scrapping policies; and (g) a modular structure that keeps implementation costs low in the basic, fully functional version while offering scalability without requiring a complete system upgrade.
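
Item (e) describes the out-of-normal detection only at a high level. One simple realization, sketched below, is a rolling z-score test on a single telemetry channel; the window size, warm-up length, threshold, and the coolant-temperature example are all illustrative assumptions, not details of the actual IntelligentLogger system.

    import statistics
    from collections import deque

    class OutOfNormalDetector:
        # Rolling z-score check on one telemetry channel
        # (e.g., coolant temperature); parameters are assumptions.
        def __init__(self, window=120, z_threshold=3.0):
            self.samples = deque(maxlen=window)
            self.z_threshold = z_threshold

        def update(self, value):
            alert = False
            if len(self.samples) >= 5:  # wait for a minimal history
                mean = statistics.fmean(self.samples)
                std = statistics.pstdev(self.samples)
                if std > 0 and abs(value - mean) / std > self.z_threshold:
                    alert = True  # out-of-normal reading -> raise an alert
            self.samples.append(value)
            return alert

    detector = OutOfNormalDetector()
    for reading in [88, 89, 90, 88, 87, 89, 120]:  # last value is anomalous
        if detector.update(reading):
            print("ALERT:", reading)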

Keywords: E-maintenance, predictive maintenance, IoT sensor nodes, cost optimization, artificial intelligence, heavy-duty vehicles.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 774
49 Sustainable Geographic Information System-Based Map for Suitable Landfill Sites in Aley and Chouf, Lebanon

Authors: Allaw Kamel, Bazzi Hasan

Abstract:

Municipal solid waste (MSW) generation is among the most significant threats to global environmental health, and solid waste management has been an important environmental problem in developing countries because of the difficulty of finding sustainable solutions for solid waste. More effort is therefore needed to overcome this problem. Lebanon suffered a severe solid waste management crisis in 2015, and a new landfill site was proposed to solve the existing problem. This study aims to identify and locate the most suitable area for constructing a landfill, taking sustainable development into consideration in order to overcome the present situation and protect future demands. The article discusses a landfill site selection methodology using Geographic Information Systems (GIS) and Multi-Criteria Decision Analysis (MCDA). Several environmental, economic, and social factors were taken as criteria for landfill selection. Soil, geology, and Land Use and Land Cover (LULC) indices, together with a Sustainable Development Index, were the main inputs used to create the final map of Environmentally Sensitive Areas (ESA) for landfill siting. Different factors were determined to define each index, and the input data for each factor were managed, visualized, and analyzed using GIS, which served as an essential tool for identifying suitable areas for a landfill. Spatial Analysis (SA), Analysis, and Management GIS tools were used to produce input maps capable of identifying suitable areas related to each index. Weights were assigned to the factors within each index, and overall weights were assigned to each index. Combining the different index maps generates the final ESA output map, which was reclassified into three suitability classes: low, moderate, and high. The results show several locations suitable for the construction of a landfill and reflect the importance of GIS and MCDA in helping decision makers address the solid waste problem through a sanitary landfill.
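
The core overlay step can be made concrete as follows: each index raster is reclassified to a common suitability scale, combined as a weighted sum, and then cut into the three suitability classes. The sketch below uses synthetic rasters with illustrative weights and class breaks; none of these values come from the study.

    import numpy as np

    # Hypothetical index rasters already reclassified to a common 1-9 scale
    rng = np.random.default_rng(0)
    soil = rng.integers(1, 10, size=(200, 200)).astype(float)
    geology = rng.integers(1, 10, size=(200, 200)).astype(float)
    luc = rng.integers(1, 10, size=(200, 200)).astype(float)

    # Illustrative index weights (must sum to 1)
    weights = {"soil": 0.4, "geology": 0.35, "luc": 0.25}
    esa = (weights["soil"] * soil
           + weights["geology"] * geology
           + weights["luc"] * luc)

    # Reclassify the continuous ESA surface into low/moderate/high suitability
    suitability = np.digitize(esa, bins=[4.0, 7.0])  # 0=low, 1=moderate, 2=high
    print(np.bincount(suitability.ravel()))  # pixel counts per class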

Keywords: Sustainable development, landfill, municipal solid waste, geographic information system, GIS, multi criteria decision analysis, environmentally sensitive area.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 886