Search results for: operating time
17058 A Research of the Prototype Fuel Injector for the Aircraft Two-Stroke Opposed-Piston Diesel Engine
Authors: Ksenia Siadkowska, Zbigniew Czyz, Lukasz Grabowski
Abstract:
The paper presents the research results of the construction of an injector with a modified injection nozzle. The injector is designed for a prototype aircraft opposed-piston diesel engine with an assumed starting power of 100 kW. The injector has been subjected to optical tests carried out in a constant volume chamber with the use of a camera that records images at a frequency of 5400 fps and a resolution of 1024x1024. The measurements were based on a Mie scattering technique with global lighting. Seven repetitions were made for a specific measurement point. The measuring point was selected on the basis of the analysis of engine operating conditions. The analysis focused on the average range of the spray and its distribution. As a result of the conducted research, the range of the fuel spray was defined for the determined parameters of injection. The obtained results were used to verify and optimize the combustion process in the designed opposed-piston two-stroke diesel engine. Acknowledgment: This work has been realized in cooperation with The Construction Office of WSK 'PZL-KALISZ' S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.
Keywords: diesel engine, opposed-piston, aircraft, fuel injector
Procedia PDF Downloads 128
17057 Online Monitoring of Airborne Bioaerosols Released from a Composting, Green Waste Site
Authors: John Sodeau, David O'Connor, Shane Daly, Stig Hellebust
Abstract:
This study is the first to employ the online WIBS (Wideband Integrated Bioaerosol Sensor) technique for the monitoring of bioaerosol emissions and non-fluorescing “dust” released from a composting/green waste site. The purpose of the research was to provide a “proof of principle” for using WIBS to monitor such a location continually over days and nights in order to construct comparative “bioaerosol site profiles”. Current impaction/culturing methods take many days to achieve results available by the WIBS technique in seconds. The real-time data obtained were then used to assess variations of the bioaerosol counts as a function of size, “shape”, site location, working activity levels, time of day, relative humidity, wind speeds and wind directions. Three short campaigns were undertaken, one classified as a “light” workload period, another as a “heavy” workload period and finally a weekend when the site was closed. One main bioaerosol size regime was found to predominate: 0.5 micron to 3 micron, with morphologies ranging from elongated to ellipsoidal/spherical. The real-time number-concentration data were consistent with an Andersen sampling protocol that was employed at the site. The number-concentrations of fluorescent particles as a proportion of total particles counted amounted, on average, to ~1% for the “light” workday period, ~7% for the “heavy” workday period and ~18% for the weekend. The bioaerosol release profiles at the weekend were considerably different from those monitored during the working weekdays.
Keywords: bioaerosols, composting, fluorescence, particle counting in real-time
Procedia PDF Downloads 355
17056 Alterations in the Abundance of Ruminal Microbial Species during the Peripartal Period in Dairy Cows
Authors: S. Alqarni, J. C. McCann, A. Palladino, J. J. Loor
Abstract:
Seven fistulated Holstein cows were used from 3 weeks prepartum to 4 weeks postpartum to determine the relative abundance of 7 different species of ruminal microorganisms. The prepartum diet was based on corn silage. The postpartum diet included ground corn, grain by-products, and alfalfa haylage. Ruminal digesta were collected at five times: -14, -7, 10, 20, and 28 days around parturition. Total DNA from ruminal digesta was isolated and real-time quantitative PCR was used to determine the relative abundance of bacterial species. Eubacterium ruminantium and Selenomonas ruminantium were not affected by time (P>0.05). Megasphaera elsdenii and Prevotella bryantii increased significantly postpartum (P<0.001). Conversely, Butyrivibrio proteoclasticus decreased gradually from -14 through 28 days (P<0.001). Fibrobacter succinogenes was affected by time, being lowest at day 10 (P=0.02), while Anaerovibrio lipolytica recorded the lowest abundance at -7 d followed by an increase by 20 days postpartum (P<0.001). Overall, these results indicate that changes in diet after parturition affect the abundance of ruminal bacteria, particularly M. elsdenii (a lactate-utilizing bacterium) and P. bryantii (a starch-degrading bacterium), which increased markedly after parturition, likely as a consequence of a higher concentrate intake.
Keywords: rumen bacteria, transition cows, rumen metabolism, peripartal period
Procedia PDF Downloads 569
17055 Artificial Neural Network Reconstruction of Proton Exchange Membrane Fuel Cell Output Profile under Transient Operation
Abstract:
Unbalanced power output from individual cells of a Proton Exchange Membrane Fuel Cell (PEMFC) has direct effects on PEMFC stack performance, in particular under transient operation. In the paper, a multi-layer ANN (Artificial Neural Network) model based on Radial Basis Functions (RBF) has been developed for predicting cells' output profiles from gas supply parameters, cooling conditions, temperature measurements of individual cells, etc. The feed-forward ANN model was validated with experimental data. The influence of relevant RBF parameters on network accuracy was investigated. After adequate model training, the modelling results show good correspondence between actual measurements and reconstructed output profiles. Finally, the model was used to optimize the stack output performance under steady-state and transient operating conditions, and the results suggest that the developed ANN control model can help the PEMFC stack achieve a clear improvement in power output during fast acceleration.
Keywords: proton exchange membrane fuel cell, PEMFC, artificial neural network, ANN, cell output profile, transient
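As an illustration of the RBF approach described above, the following minimal sketch maps a vector of operating parameters to a per-cell voltage profile using Gaussian basis functions and a least-squares output layer. The feature dimensions and data are placeholders, not the authors' dataset or exact network configuration.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    # Gaussian radial basis activations for every sample/centre pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_rbf(X, Y, n_centers=20, gamma=1.0, seed=0):
    # Centres chosen as a random subset of the training samples (simplest choice)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=min(n_centers, len(X)), replace=False)]
    Phi = rbf_features(X, centers, gamma)
    # Linear output layer solved by least squares, one column per cell voltage
    W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return centers, W

def predict_rbf(X, centers, W, gamma=1.0):
    return rbf_features(X, centers, gamma) @ W

# Hypothetical data: operating inputs (gas flows, coolant temperature, stack
# current, ...) and the measured voltage of each individual cell as targets.
X_train = np.random.rand(200, 5)     # placeholder operating conditions
Y_train = np.random.rand(200, 40)    # placeholder per-cell voltages
centers, W = fit_rbf(X_train, Y_train)
print(predict_rbf(X_train, centers, W).shape)   # (200, 40) reconstructed profile
```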
Procedia PDF Downloads 169
17054 An Architecture Framework for Design of Assembly Expert System
Authors: Chee Fai Tan, L. S. Wahidin, S. N. Khalil
Abstract:
Nowadays, manufacturing cost is one of the important factors that affect the product cost as well as company profit. There are many methods that have been used to reduce manufacturing cost in order for a company to stay competitive. One of the factors that affect manufacturing cost is time. An expert system can be used as a method to reduce manufacturing time. The purpose of the expert system is to diagnose and solve the problems of design of assembly. The paper describes an architecture framework for a design-of-assembly expert system that focuses on the commercial vehicle seat manufacturing industry.
Keywords: design of assembly, expert system, vehicle seat, mechanical engineering
Procedia PDF Downloads 438
17053 Reliability and Probability Weighted Moment Estimation for Three Parameter Mukherjee-Islam Failure Model
Authors: Ariful Islam, Showkat Ahmad Lone
Abstract:
The Mukherjee-Islam model is commonly used as a simple lifetime distribution to assess system reliability. The model exhibits a better fit for failure information and provides more appropriate information about the hazard rate and other reliability measures, as shown by various authors. It is possible to introduce a location parameter (i.e., a time before which failure cannot occur), which makes it a more useful failure distribution than the existing ones. Even after shifting the location of the distribution, it represents decreasing, constant and increasing failure rates. It has been shown to represent the appropriate lower tail of the distribution of random variables having a fixed lower bound. This study presents the reliability computations and probability weighted moment estimation of the three-parameter model. A comparative analysis is carried out between the three-parameter finite range model and some existing bathtub-shaped curve fitting models. Since the probability weighted moment method is used, the results obtained can also be applied to small-sample cases. The maximum likelihood estimation method is also applied in this study.
Keywords: comparative analysis, maximum likelihood estimation, Mukherjee-Islam failure model, probability weighted moment estimation, reliability
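For readers unfamiliar with probability weighted moments, the following sketch computes the standard unbiased sample estimates (Greenwood/Hosking form) from the order statistics of a sample; equating them to the theoretical moments of the three-parameter model then yields the parameter estimators. The data shown are purely illustrative and the sketch does not reproduce the paper's specific fitting equations.

```python
import numpy as np

def sample_pwm(x, r):
    """Unbiased sample probability weighted moment b_r, an estimate of
    E[X F(X)^r], computed from the sorted sample (Greenwood/Hosking form)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    w = np.ones(n)
    for k in range(1, r + 1):
        w *= (j - k) / (n - k)          # weight (j-1)...(j-r) / (n-1)...(n-r)
    return np.mean(w * x)

# Example: the first three sample PWMs of an illustrative failure-time sample.
data = [0.8, 1.3, 1.9, 2.2, 2.7, 3.1, 3.4]
b0, b1, b2 = (sample_pwm(data, r) for r in range(3))
print(b0, b1, b2)
```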
Procedia PDF Downloads 274
17052 In vitro Effects of Berberine on the Vitality and Oxidative Profile of Bovine Spermatozoa
Authors: Eva Tvrdá, Hana Greifová, Peter Ivanič, Norbert Lukáč
Abstract:
The aim of this study was to evaluate the dose- and time-dependent in vitro effects of berberine (BER), a natural alkaloid with numerous biological properties, on bovine spermatozoa during three time periods (0 h, 2 h, 24 h). Bovine semen samples were diluted and cultivated in physiological saline solution containing 0.5% DMSO together with 200, 100, 50, 10, 5, and 1 μmol/L BER. Spermatozoa motility was assessed using a computer-assisted semen analyzer. The viability of spermatozoa was assessed by the metabolic (MTT) assay, production of superoxide radicals was quantified using the nitroblue tetrazolium (NBT) test, and chemiluminescence was used to evaluate the generation of reactive oxygen species (ROS). Cell lysates were prepared and the extent of lipid peroxidation (LPO) was evaluated using the TBARS assay. The results of the movement activity showed a significant increase in motility during long-term cultivation for concentrations ranging between 1 and 10 μmol/L BER (P < 0.01; P < 0.001; 24 h). At the same time, supplementation of 1, 5 and 10 μmol/L BER led to a significant preservation of cell viability (P < 0.001; 24 h). BER addition in the range of 1-50 μmol/L also provided significantly higher protection against superoxide (P < 0.05) and ROS (P < 0.001; P < 0.01) overgeneration as well as LPO (P < 0.01; P < 0.05) after a 24 h cultivation. We may suggest that supplementation of BER to bovine spermatozoa, particularly at concentrations ranging between 1 and 50 μmol/L, may offer protection to the motility, viability and oxidative status of the spermatozoa, particularly notable at 24 h.
Keywords: berberine, bulls, motility, oxidative profile, spermatozoa, viability
Procedia PDF Downloads 130
17051 Duplicated Common Bile Duct: A Recipe for Injury
Authors: David Armany, Matthew Allaway, Preet Gosal, Senarath Edirimanne
Abstract:
A potentially devastating complication of routine laparoscopic cholecystectomy is iatrogenic bile duct injury, which has shown a stable incidence rate of 0.3% over the past three decades. Whilst related to several risk factors such as surgeon experience and patient factors (older age, male sex), misinterpretation of biliary tree anatomy remains the most common cause, accounting for 80% of iatrogenic Common Bile Duct injuries. Whilst extremely rare, a duplicated common bile duct anomaly remains a potential variation to encounter during biliary surgery, with 30 recognised cases in the worldwide literature, of which type Vb accounts for 4. We report a rare type Vb variation encountered intra-operatively during laparoscopic cholecystectomy and confirmed on cholangiogram. To our knowledge, this is the first documented Type Vb case encountered in an Australian population. Given that these anomalies are asymptomatic and can predispose to iatrogenic common bile duct injuries, awareness of all subtypes is crucial. Notably, preoperative Magnetic Resonance Cholangiopancreatography can help recognise these anomalies before the operating theatre; however, its widespread adoption is limited by expense and availability.
Keywords: duplicated common bile duct, type Vb, cholecystitis, MRCP, cholangiogram, iatrogenic CBD
Procedia PDF Downloads 90
17050 Control of Pipeline Gas Quality to Extend Gas Turbine Life
Authors: Peter J. H. Carnell, Panayiotis Theophanous
Abstract:
Natural gas, due to its cleaner combustion characteristics, is expected to be the most widely used fuel in the move towards less polluting and renewable energy sources. Thus, the developed world is supplied by a complex network of gas pipelines, and natural gas is becoming a major source of fuel. Natural gas delivered directly from the well will differ in composition from gas derived from LNG or produced by anaerobic digestion processes. Each will also have specific contaminants and properties, although gas from all sources is likely to enter the distribution system and be blended to provide the desired characteristics such as Higher Heating Value and Wobbe number. The absence of a standard gas composition poses problems when the gas is used as a chemical feedstock, in specialised furnaces or on gas turbines. The chemical industry has suffered in the past as a result of variable gas composition. Transition metal catalysts used in ammonia, methanol and hydrogen plants were easily poisoned by sulphur, chlorides and mercury, reducing both activity and expected catalyst life from years to months. These plants now concentrate on purification and conditioning of the natural gas feed using fixed-bed technologies, allowing them to run for several years and transforming their operations. Similar technologies can be applied to the power industry, reducing maintenance requirements and extending the operating life of gas turbines.
Keywords: gas composition, gas conditioning, gas turbines, power generation, purification
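Since blending targets a desired Higher Heating Value and Wobbe number, a short numerical sketch of the standard Wobbe index definition (heating value divided by the square root of the gas's relative density with respect to air) may help; the heating value, density and specification band below are illustrative assumptions, not values from the paper or any specific national specification.

```python
def wobbe_index(hhv_mj_per_m3, relative_density):
    """Wobbe index = higher heating value / sqrt(relative density vs. air)."""
    return hhv_mj_per_m3 / relative_density ** 0.5

hhv = 39.0    # MJ/m3, assumed heating value of a pipeline blend
sg = 0.60     # assumed relative density with respect to air
wi = wobbe_index(hhv, sg)
print(f"Wobbe index: {wi:.1f} MJ/m3")

# Simple acceptance check against a hypothetical pipeline specification band.
WI_MIN, WI_MAX = 47.2, 51.4
print("within spec" if WI_MIN <= wi <= WI_MAX else "out of spec")
```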
Procedia PDF Downloads 286
17049 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time deliveries is of paramount importance in today’s competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers should frequently decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery to improve the system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on the jobs’ late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard to solve; however, we explore three scenarios of the problem wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time, and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single-machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
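To make the objective concrete, the sketch below estimates the expected weighted number of tardy jobs for a fixed sequence by Monte Carlo simulation and compares a few candidate orderings. The three-job instance and its distributions are hypothetical, and the brute-force comparison stands in for the paper's heuristics, which are not reproduced here.

```python
import random

def expected_weighted_tardy(sequence, proc_dists, due_dists, weights, n_sim=5000):
    """Monte Carlo estimate of the expected weighted number of tardy jobs for a
    fixed sequence on a single server; processing times and due dates are drawn
    from caller-supplied distributions (callables returning one sample)."""
    total = 0.0
    for _ in range(n_sim):
        t, cost = 0.0, 0.0
        for j in sequence:
            t += proc_dists[j]()            # completion time of job j
            if t > due_dists[j]():          # tardy if it misses its due date
                cost += weights[j]
        total += cost
    return total / n_sim

# Hypothetical 3-job instance: exponential processing times, fixed due dates.
proc = {0: lambda: random.expovariate(1 / 2.0),
        1: lambda: random.expovariate(1 / 3.0),
        2: lambda: random.expovariate(1 / 1.5)}
due = {0: lambda: 4.0, 1: lambda: 6.0, 2: lambda: 5.0}
w = {0: 3.0, 1: 1.0, 2: 2.0}

best = min([(0, 1, 2), (2, 0, 1), (1, 2, 0), (2, 1, 0)],
           key=lambda s: expected_weighted_tardy(s, proc, due, w))
print(best)
```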
Procedia PDF Downloads 498
17048 Fixed-Frequency Pulse Width Modulation-Based Sliding Mode Controller for Switching Multicellular Converter
Authors: Rihab Hamdi, Amel Hadri Hamida, Ouafae Bennis, Fatima Babaa, Sakina Zerouali
Abstract:
This paper features a sliding mode controller (SMC) for closed-loop voltage control of a DC-DC three-cell buck converter connected in parallel, operating in continuous conduction mode (CCM), based on pulse-width modulation (PWM). To maintain the switching frequency, the approach is to incorporate a pulse-width modulator that utilizes an equivalent control, derived by applying the SM control method, to produce a control signal that is compared with the fixed-frequency carrier within the modulator. Detailed stability and transient performance analyses have been conducted using Lyapunov stability criteria to restrict the switching frequency variation in the face of wide variations in output load, input changes, and set-point changes. The results obtained confirm the effectiveness of the proposed control scheme in achieving an enhanced output transient performance while faithfully realizing its control objective in the event of abrupt and uncertain parameter variations. Simulation studies in the MATLAB/Simulink environment are performed to confirm the approach.
Keywords: DC-DC converter, pulse width modulation, power electronics, sliding mode control
Procedia PDF Downloads 147
17047 Effect of Distance Education Students Motivation with the Turkish Language and Literature Course
Authors: Meva Apaydin, Fatih Apaydin
Abstract:
The role of education in the development of society is great. Teaching and training began at the start of history, and different methods and techniques have been applied as time passed, all with the aim of raising the level of learning. In addition to traditional teaching methods, technology has been used in recent years. With the introduction of the internet into education, some problems that could not previously be solved have been addressed, and it has become clear that learners can be educated using contemporary methods as well as traditional ones. As an advantage of technological developments, distance education is a system that paves the way for students to be educated individually, wherever and whenever they like, without the need for a physical school environment. Distance education has become prevalent because of physical inadequacies in educational institutions; as a result, disadvantageous circumstances such as social complexities, individual differences and, especially, geographical distance disappear. What is more, fast feedback between teachers and learners, improved student motivation because there is no limitation of time, low cost, and objective measurement and evaluation come to the foreground. Although distance education has teaching benefits, it also has limitations. Some of the most important problems are the following: problems that arise may not be solved in time; the lack of eye contact between teacher and learner means trustworthy feedback cannot be obtained; and difficulties stemming from an inadequate technological background are merely some of them. Courses are conducted via distance education in many departments of the universities in our country. In recent years, giving lectures such as Turkish Language, English, and History in the first years of university academic departments has become an increasingly prevalent practice. In this study, the delivery of the Turkish Language course via distance education is examined by analyzing the advantages and disadvantages of the internet-based distance education system.
Keywords: distance education, Turkish language, motivation, benefits
Procedia PDF Downloads 437
17046 Prime Mover Sizing for Base-Loaded Combined Heating and Power Systems
Authors: Djalal Boualili
Abstract:
This article considers the problem of sizing prime movers for combined heating and power (CHP) systems operating at full load to satisfy a fraction of a facility's electric load, i.e., a base load. Prime mover sizing is examined using three criteria: operational cost, carbon dioxide emissions (CDE), and primary energy consumption (PEC). The sizing process leads to the consideration of ratios of the conversion factors applied to imported electricity to the conversion factors applied to fuel consumed. These ratios are labelled RCost, RCDE, and RPEC, depending on whether the conversion factors are associated with operational cost, CDE, or PEC, respectively. Analytical results show that in order to achieve savings in operational cost, CDE, or PEC, the ratios must be larger than a unique constant RMin that depends only on the efficiencies of the CHP components. Savings in operational cost, CDE, or PEC due to CHP operation are explicitly formulated using simple equations. This facilitates the process of comparing the tradeoffs of optimizing the savings of one criterion over the other two, a task that has traditionally been accomplished through computer simulations. A hospital building, located in Chlef, Algeria, was used as an example to apply the methodology presented in this article.
Keywords: sizing, heating and power, ratios, energy consumption, carbon dioxide emissions
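A rough sketch of the kind of comparison involved is given below: it computes the savings from a base-loaded CHP unit relative to importing the same electricity and raising the same heat in a boiler, for whichever conversion factors (cost, CDE or PEC) are passed in. The break-even ratio shown, which depends only on component efficiencies, follows from the stated assumptions (all recovered heat is useful and displaces boiler heat) and is illustrative; it is not necessarily the article's exact RMin, and all numerical values are placeholders.

```python
def chp_savings(fuel_to_chp, eta_e, eta_q, eta_boiler, f_elec, f_fuel):
    """Savings (cost, CDE or PEC, depending on the conversion factors supplied)
    from running a base-loaded CHP unit instead of importing electricity and
    producing the same heat in a boiler. Assumes all recovered heat is useful."""
    electricity = eta_e * fuel_to_chp               # on-site generation
    heat        = eta_q * fuel_to_chp               # recovered useful heat
    baseline = electricity * f_elec + (heat / eta_boiler) * f_fuel
    chp_case = fuel_to_chp * f_fuel
    return baseline - chp_case

# Under these assumptions, savings are positive when f_elec / f_fuel exceeds
# (1 - eta_q/eta_boiler) / eta_e, a constant set only by component efficiencies.
eta_e, eta_q, eta_b = 0.30, 0.45, 0.85              # assumed efficiencies
print("break-even ratio ~", round((1 - eta_q / eta_b) / eta_e, 2))
print(chp_savings(1000.0, eta_e, eta_q, eta_b, f_elec=0.20, f_fuel=0.05))
```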
Procedia PDF Downloads 231
17045 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach, it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time
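The frequency-domain speed-up mentioned above rests on the convolution theorem: multiplying zero-padded FFTs is equivalent to direct linear convolution but scales much better for the large filter banks a DPM evaluates. The sketch below verifies the equivalence on a small random example; the array sizes are arbitrary and unrelated to the paper's feature maps.

```python
import numpy as np
from scipy.signal import convolve2d

def fft_convolve2d(image, kernel):
    """Full linear 2-D convolution computed via the frequency domain."""
    s0 = image.shape[0] + kernel.shape[0] - 1
    s1 = image.shape[1] + kernel.shape[1] - 1
    F = np.fft.rfft2(image, (s0, s1)) * np.fft.rfft2(kernel, (s0, s1))
    return np.fft.irfft2(F, (s0, s1))

# Sanity check against direct spatial convolution on placeholder data.
img = np.random.rand(64, 48)
ker = np.random.rand(7, 7)
print(np.allclose(fft_convolve2d(img, ker), convolve2d(img, ker)))  # True
```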
Procedia PDF Downloads 281
17044 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g., runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase the accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW involves searching for the quickest path by expanding the search in all directions, similarly to Dijkstra’s algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
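As a much simplified illustration of the idea (not the authors' QPPTW implementation, which also handles pushback, stand holding and runway sequencing), the sketch below runs an A-star-guided quickest-path search on a toy taxiway graph where nodes carry reserved time intervals from previously routed aircraft, waiting is allowed, and the heuristic is an optimistic estimate of the remaining travel time. All graph data are hypothetical.

```python
import heapq

def earliest_entry(t, blocked):
    """Earliest time >= t at which a node is free of reserved time windows."""
    for a, b in sorted(blocked):
        if t < a:
            break
        if a <= t < b:
            t = b
    return t

def astar_quickest_path(graph, blocked, h, start, goal, t0=0.0):
    """graph[u] = [(v, traversal_time), ...]; blocked[v] = reserved intervals
    at v; h[v] = optimistic remaining travel time to the goal (A* heuristic)."""
    best = {start: t0}
    frontier = [(t0 + h[start], t0, start, [start])]
    while frontier:
        _, t, u, path = heapq.heappop(frontier)
        if u == goal:
            return t, path
        if t > best.get(u, float("inf")):
            continue                       # stale queue entry
        for v, dt in graph.get(u, []):
            tv = earliest_entry(t + dt, blocked.get(v, []))
            if tv < best.get(v, float("inf")):
                best[v] = tv
                heapq.heappush(frontier, (tv + h[v], tv, v, path + [v]))
    return None

# Toy taxiway: A -> B -> D and A -> C -> D, with node B reserved early on.
graph = {"A": [("B", 10), ("C", 12)], "B": [("D", 10)], "C": [("D", 14)]}
blocked = {"B": [(5, 30)]}
h = {"A": 20, "B": 10, "C": 14, "D": 0}
print(astar_quickest_path(graph, blocked, h, "A", "D"))  # routes via C
```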
Procedia PDF Downloads 231
17043 Hohmann Transfer and Bi-Elliptic Hohmann Transfer in TRAPPIST-1 System
Authors: Jorge L. Nisperuza, Wilson Sandoval, Edward. A. Gil, Johan A. Jimenez
Abstract:
In orbital mechanics, an active research topic is the calculation of interplanetary trajectories that are efficient in terms of energy and time. In this sense, this work concerns the calculation of the orbital elements for sending interplanetary probes within the extrasolar system TRAPPIST-1. Specifically, using the mathematical expressions for the circular and elliptical trajectory parameters, together with the expressions for the flight time and the velocity increments required for transfer between orbits, the orbital parameters and the trajectory plots of the Hohmann and bi-elliptic Hohmann transfers for sending a probe from the innermost planet to all the other planets of the studied system are obtained. The relationships between the velocity increments and between the flight times for the two transfer types are found. The results show that, for all cases under consideration, the Hohmann transfer has the lowest energy and time cost, in agreement with the theory associated with Hohmann and bi-elliptic Hohmann transfers. A saving in the velocity increment of up to 87% was found, occurring for the transfer between the two innermost planets, whereas the time of flight increases by a factor of up to 6.6 if the bi-elliptic transfer is used, in the case of sending a probe from the innermost planet to the outermost.
Keywords: bi-elliptic Hohmann transfer, exoplanet, extrasolar system, Hohmann transfer, TRAPPIST-1
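The classical Hohmann expressions used in such calculations are compact enough to sketch directly: the two impulses and the half-ellipse flight time between coplanar circular orbits. The gravitational parameter and orbit radii below are rough, rounded values of the order of the TRAPPIST-1 system, not the paper's exact inputs.

```python
from math import sqrt, pi

def hohmann(mu, r1, r2):
    """Total velocity increment and flight time for a Hohmann transfer
    between circular, coplanar orbits of radii r1 and r2."""
    a_t = (r1 + r2) / 2.0
    dv1 = sqrt(mu / r1) * (sqrt(2 * r2 / (r1 + r2)) - 1)   # departure burn
    dv2 = sqrt(mu / r2) * (1 - sqrt(2 * r1 / (r1 + r2)))   # arrival burn
    t_flight = pi * sqrt(a_t ** 3 / mu)                    # half ellipse period
    return dv1 + dv2, t_flight

# Approximate, rounded values: an M-dwarf-sized gravitational parameter and
# orbit radii of the order of the innermost and outermost TRAPPIST-1 planets.
mu = 1.2e19              # m^3/s^2, assumed
r1, r2 = 1.73e9, 9.3e9   # m, assumed circular orbit radii
dv, tof = hohmann(mu, r1, r2)
print(f"delta-v = {dv/1000:.1f} km/s, time of flight = {tof/86400:.1f} days")
```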
Procedia PDF Downloads 193
17042 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies
Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan
Abstract:
The advent of Web 2.0 makes it possible to scale down the costs of data collection and mapping, specifically when the process is done by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor collecting spatial, descriptive, and multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, can be considered one of the important challenges for tourists in developing countries in making the best decision about the time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist-guide service in which tourists can search places of interest based on their requested time of travel. To design the service, a three-tier architecture, comprising data, logical processing, and presentation tiers, has been utilized. For implementing the service, open-source software programs, client- and server-side programming languages (such as OpenLayers2, AJAX, and PHP), GeoServer as a map server, and the Web Feature Service (WFS) standard have been used. The result is two distinct browser-based services, one for sending spatial, descriptive, and multimedia volunteer data and another one for tourists and local officials. The local official confirms the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine has been designed to enable tourists to find a tourist place based on province, city, and location at a specific time of interest. Implementing the tourist-guide service with this methodology has the following effects: current tourists participate in a free data collection and sharing process for future tourists; data are shared and accessed in real time by all; a blind selection of the travel destination is avoided; and, significantly, the cost of providing such services decreases.
Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping
Procedia PDF Downloads 98
17041 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based generalized end-to-end open domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysis of the question, searching the relevant document, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process, using a model trained on the MS-MARCO dataset of 500K queries, to extract the most relevant text passage and shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. However, automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to any questions that have time-varying answers. For illustration, consider the query “Where will the next Olympics be held?” The gold answer for this query as given in the GNQ dataset is “Tokyo”. Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games in Tokyo, this answer was correct. But if the same question is asked in 2022, then the answer is “Paris, 2024”. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset has been used, and the system achieved an accuracy of 78% for a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be guided towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
Procedia PDF Downloads 101
17040 Multichannel Surface Electromyography Trajectories for Hand Movement Recognition Using Intrasubject and Intersubject Evaluations
Authors: Christina Adly, Meena Abdelmeseeh, Tamer Basha
Abstract:
This paper proposes a system for hand movement recognition using multichannel surface EMG (sEMG) signals obtained from 40 subjects performing 40 different exercises, which are available in the Ninapro (Non-Invasive Adaptive Prosthetics) database. First, we applied processing methods to the raw sEMG signals to convert them to their amplitudes. Second, we used deep learning methods to solve our problem by passing the preprocessed signals to fully connected neural networks (FCNN) and recurrent neural networks (RNN) with Long Short-Term Memory (LSTM). Using intrasubject evaluation, the accuracy of the FCNN is 72%, with a training time of around 76 minutes, while the RNN's accuracy is 79.9%, with a processing time of 8 minutes and 22 seconds. Third, we applied some postprocessing methods to improve the accuracy, namely majority voting (MV) and the Movement Error Rate (MER). The accuracy after applying MV is 75% and 86% for the FCNN and RNN, respectively. The MER value has an inverse relationship with the prediction delay when varying the window length used for the MV. A separate part of the study uses the RNN with intersubject evaluation. The experimental results showed that to get good testing accuracy with reasonable processing time, we should use around 20 subjects.
Keywords: hand movement recognition, recurrent neural network, movement error rate, intrasubject evaluation, intersubject evaluation
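The majority-voting step mentioned above can be sketched in a few lines: each output label is the most frequent class among the last few raw predictions, which smooths isolated errors at the cost of a small recognition delay (the trade-off the MER measures). The class labels and window length below are hypothetical.

```python
from collections import Counter

def majority_vote(labels, window):
    """Smooth a stream of per-window class predictions by taking the most
    frequent label over the trailing `window` raw predictions."""
    smoothed = []
    for i in range(len(labels)):
        votes = labels[max(0, i - window + 1): i + 1]
        smoothed.append(Counter(votes).most_common(1)[0][0])
    return smoothed

raw = [3, 3, 7, 3, 3, 3, 5, 5, 3, 5, 5, 5]   # hypothetical movement classes
print(majority_vote(raw, window=5))
```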
Procedia PDF Downloads 142
17039 A Drawing Software for Designers: AutoCAD
Authors: Mayar Almasri, Rosa Helmi, Rayana Enany
Abstract:
This report describes the features of the AutoCAD software released by Autodesk. It explains how the program makes work easier for engineers and designers and reduces the time and effort they spend in AutoCAD. Moreover, it highlights how AutoCAD works, how some of its commands, such as shortcuts, make it easy to use, and the features that make it accurate in measurements. The results of the report show that most users of this program are designers and engineers, but few other people know about it and find it easy to use. Users prefer it because it is easy to use, and the shortcut commands save them a lot of time. The features received a high rating, and there were some suggestions for improving AutoCAD in Aperture, but these came from a small percentage of respondents; the highest percentage felt the program did not need improvement and was good.
Keywords: artificial intelligence, design, planning, commands, autodesk, dimensions
Procedia PDF Downloads 131
17038 Mathematical and Numerical Analysis of a Reaction Diffusion System of Lambda-Omega Type
Authors: Hassan Al Salman, Ahmed Al Ghafli
Abstract:
In this study, we consider a nonlinear-in-time finite element approximation of a reaction-diffusion system of lambda-omega type. We use a fixed-point theorem to prove the existence of the approximations. Then, we derive some essential stability estimates and discuss the uniqueness of the approximations. Also, we prove an optimal error bound in time for d = 1, 2 and 3 space dimensions. Finally, we present some numerical experiments to verify the theoretical results.
Keywords: reaction diffusion system, finite element approximation, fixed point theorem, an optimal error bound
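For readers unfamiliar with the model class, a lambda-omega reaction-diffusion system is commonly written in the following form; the exact kinetics analysed in the study may differ.

```latex
% Lambda-omega reaction-diffusion system (a commonly used form)
\begin{aligned}
  u_t &= \Delta u + \lambda(r)\,u - \omega(r)\,v, \\
  v_t &= \Delta v + \omega(r)\,u + \lambda(r)\,v,
  \qquad r = \sqrt{u^{2}+v^{2}},
\end{aligned}
% with, for example, \lambda(r) = 1 - r^{2} and \omega(r) a prescribed
% function of r.
```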
Procedia PDF Downloads 534
17037 Performance Evaluation of Content Based Image Retrieval Using Indexed Views
Authors: Tahir Iqbal, Mumtaz Ali, Syed Wajahat Kareem, Muhammad Harris
Abstract:
Digital information is expanding exponentially in our lives. Information residing online and offline is stored in huge repositories relating to every aspect of our lives. Getting the required information is the task of retrieval systems. Content-based image retrieval (CBIR) is a retrieval system that retrieves the required information from repositories on the basis of the contents of the image. Time is a critical factor in retrieval systems, and using indexed views with a CBIR system improves the time efficiency of the retrieved results.
Keywords: content based image retrieval (CBIR), indexed view, color, image retrieval, cross correlation
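The keywords point to colour features matched by cross-correlation; the sketch below shows only that content-matching step, ranking a tiny in-memory repository against a query image by the normalised cross-correlation of colour histograms. The indexed-view layer described in the paper (a database-side optimisation) is not reproduced here, and the image data are placeholders.

```python
import numpy as np

def color_histogram(image, bins=16):
    """Concatenated per-channel colour histograms, normalised to unit sum."""
    h = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def cross_correlation(h1, h2):
    """Normalised (zero-mean) cross-correlation between two signatures."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Placeholder repository and query; rank images by similarity to the query.
repo = {name: np.random.randint(0, 256, (64, 64, 3)) for name in ("a", "b", "c")}
query = np.random.randint(0, 256, (64, 64, 3))
qh = color_histogram(query)
ranked = sorted(repo, key=lambda n: cross_correlation(qh, color_histogram(repo[n])),
                reverse=True)
print(ranked)
```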
Procedia PDF Downloads 470
17036 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images
Authors: Shenlun Chen, Leonard Wee
Abstract:
Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland forming; >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists’ grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning-based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists into high-, medium-, low-differentiation and normal tissue, which correspond to tumor with clear gland structure, unclear gland structure, no gland structure, and non-tumor, respectively. Then a differentiation prediction model was developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models’ predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas COAD (TCGA-COAD) project was enrolled in this study. For the gland segmentation model, the area under the receiver operating characteristic (ROC) curve reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, the ROC values reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the groups of low-, medium-, high-differentiation and normal tissue in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the “maxstat” method, which is almost the same as the WHO system’s cut-off point of 50%. Both the WHO system’s cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both p-values of the log-rank test were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated through the above models. The differentiation grade was shown to stratify CRC patients well in survival analysis, with an optimized cut-off point almost the same as that of the WHO tumor grading system. A tool for automatically calculating the differentiation grade may show potential in the field of therapy decision-making and personalized treatment.
Keywords: colorectal cancer, differentiation, survival analysis, tumor grading
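The final mapping step is simple enough to sketch: converting a model-derived gland-forming percentage into the WHO categories quoted above, plus one possible way of realising the continuous, normalised grade (the paper's exact normalisation is not specified here, so the second function is an assumption).

```python
def who_grade(gland_forming_fraction):
    """Map a gland-forming fraction to the WHO grade cut-offs quoted above
    (>95%, 50-95%, 5-50%, <5%)."""
    p = gland_forming_fraction * 100
    if p > 95:
        return "well differentiated"
    if p >= 50:
        return "moderately differentiated"
    if p >= 5:
        return "poorly differentiated"
    return "undifferentiated"

def continuous_grade(gland_forming_fraction):
    """Illustrative continuous differentiation score in [0, 1]
    (1 = least gland-forming); an assumed, simple normalisation."""
    return 1.0 - max(0.0, min(1.0, gland_forming_fraction))

for f in (0.97, 0.60, 0.20, 0.02):
    print(f, who_grade(f), round(continuous_grade(f), 2))
```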
Procedia PDF Downloads 134
17035 An Effective and Efficient Web Platform for Monitoring, Control, and Management of Drones Supported by a Microservices Approach
Authors: Jorge R. Santos, Pedro Sebastiao
Abstract:
In recent years, there has been great growth in the use of drones, which are used in several areas such as security, agriculture, and research. The existence of some systems that allow the remote control of drones is a reality; however, these systems are quite simple and directed at specific functionality. This paper proposes the development of a web platform made in Vue.js and Node.js to control, manage, and monitor drones in real time. Using a microservice architecture, the proposed project will be able to integrate algorithms that allow the optimization of processes. Communication with remote devices is suggested via HTTP through 3G, 4G, and 5G networks and can be done in real time or by scheduling routes. This paper addresses the case of forest fires as one of the services that could be included in a system similar to the one presented. The results obtained in this project were a success. The communication between the web platform and the drones allowed their remote control and monitoring. The incorporation of the fire detection algorithm in the platform made possible a real-time analysis of the images captured by the drone without human intervention. The proposed system has proved to be an asset to the use of drones in fire detection. The architecture of the application developed allows other algorithms to be implemented, yielding a more capable application with clear room for expansion.
Keywords: drone control, microservices, node.js, unmanned aerial vehicles, vue.js
Procedia PDF Downloads 148
17034 Analysis of Collision Avoidance System
Authors: N. Gayathri Devi, K. Batri
Abstract:
The advent of technology has increased traffic hazards, and road accidents take place frequently. A collision detection system in an automobile aims at reducing or mitigating the severity of an accident. This project aims at avoiding vehicle head-on collisions by means of a collision detection algorithm. The collision detection algorithm predicts the collision, and the avoidance or minimization has to be done within a few seconds of confirmation. In critical situations, collision minimization is made possible by turning the vehicle to the desired turn radius so that the collision impact can be reduced. In order to avoid the collision completely, the turning of the vehicle should be performed at reduced speed in order to maintain stability.
Keywords: collision avoidance system, time to collision, time to turn, turn radius
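A minimal sketch of the time-to-collision quantity named in the keywords, with a toy decision rule built on top of it; the threshold, speeds and gap are illustrative assumptions, and the sign convention is that both speeds are measured along the ego vehicle's direction of travel (so an oncoming vehicle has a negative speed).

```python
def time_to_collision(gap_m, ego_speed_mps, other_speed_mps):
    """Time to collision from the current gap and the closing speed;
    returns None when the gap is not closing."""
    closing = ego_speed_mps - other_speed_mps
    return gap_m / closing if closing > 0 else None

def decide(gap_m, ego_speed, other_speed, ttc_threshold=3.0):
    """Toy decision rule: warn first, trigger an avoidance manoeuvre
    (braking or steering at reduced speed) when TTC drops below threshold."""
    ttc = time_to_collision(gap_m, ego_speed, other_speed)
    if ttc is None:
        return "no action"
    return "avoidance manoeuvre" if ttc < ttc_threshold else "warn driver"

print(time_to_collision(40.0, 20.0, 5.0))   # 40 m gap closing at 15 m/s
print(decide(40.0, 20.0, 5.0))
```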
Procedia PDF Downloads 549
17033 A Linearly Scalable Family of Swapped Networks
Authors: Richard Draper
Abstract:
A supercomputer can be constructed from identical building blocks which are small parallel processors connected by a network referred to as the local network. The routers have unused ports which are used to interconnect the building blocks. These connections are referred to as the global network. The address space has a global and a local component (g, l). The conventional way to connect the building blocks is to connect (g, l) to (g’, l). If there are K blocks, this requires K global ports in each router. If a block is of size M, the result is a machine with KM routers having diameter two. To increase the size of the machine to 2K blocks, each router connects to only half of the other blocks. The result is a larger machine but also one with greater diameter. This is a crude description of how the network of the CRAY XC® is designed. In this paper, a family of interconnection networks using routers with K global and M local ports is defined. Coordinates are (c, d, p) and the global connections are (c,d,p)↔(c’,p,d), which swaps p and d. The network is denoted D3(K,M) and is called a Swapped Dragonfly. D3(K,M) has KM² routers and has diameter three, regardless of the size of K. To produce a network of size KM² conventionally, diameter would be an increasing function of K. The family of Swapped Dragonflies has other desirable properties: 1) D3(K,M) scales linearly in K and quadratically in M. 2) If L < K, D3(K,M) contains many copies of D3(L,M). 3) If L < M, D3(K,M) contains many copies of D3(K,L). 4) D3(K,M) can perform an all-to-all exchange in KM²+KM time, which is only slightly more than the time to do a one-to-all. This paper makes several contributions. It is the first time that a swap has been used to define a linearly scalable family of networks. Structural properties of this new family of networks are thoroughly examined. A synchronizing packet header is introduced. It specifies the path to be followed, and it makes it possible to define highly parallel communication algorithms on the network. Among these is an all-to-all exchange in time KM²+KM. To demonstrate the effectiveness of the swap, the properties of the network of the CRAY XC® and of D3(K,16) are compared.
Keywords: all-to-all exchange, CRAY XC®, Dragonfly, interconnection network, packet switching, swapped network, topology
Procedia PDF Downloads 122
17032 Stock Movement Prediction Using Price Factor and Deep Learning
Abstract:
The development of machine learning methods and techniques has opened doors for investigation in many areas such as medicine, economics, finance, etc. One active research area involving machine learning is stock market prediction. This research paper considers multiple techniques and methods for stock movement prediction using historical prices or price factors. The paper explores the effectiveness of several deep learning frameworks for forecasting stocks. Moreover, an architecture (TimeStock) is proposed which takes the representation of time into account in addition to the price information itself. Our model achieves promising results, indicating a potential approach to the stock movement prediction problem.
Keywords: classification, machine learning, time representation, stock prediction
Procedia PDF Downloads 147
17031 Optimization of the Self-Recognition Direct Digital Radiology Technology by Applying the Density Detector Sensors
Authors: M. Dabirinezhad, M. Bayat Pour, A. Dabirinejad
Abstract:
In 2020, the SDDR (Self-Recognition Direct Digital Radiology) technology was introduced to solve some of the deficiencies of direct digital radiology. SDDR is capable of capturing dental images without human intervention and was invented by the authors of this paper. Adjusting the radiology wave dose is part of the tasks of dentists, radiologists, and dental nurses during the radiographic imaging process. In this paper, an improvement is added to enable SDDR to set a suitable radiology wave dose automatically, according to the density and age of the patient. Separate sensors will be included in the sensor package to use ultrasonic waves to detect the density of the teeth and adjust the wave dose. This facilitates dental imaging in terms of time and enhances the accuracy of choosing the correct wave dose for each patient individually. Since radiology waves are well known to trigger diseases such as cancer, choosing the most suitable wave dose can help decrease their side effects on human health. In other words, it decreases the exposure time for patients. Moreover, by saving time, less energy is consumed, and saving energy can be beneficial in decreasing the environmental impact as well.
Keywords: dental direct digital imaging, environmental impacts, SDDR technology, wave dose
Procedia PDF Downloads 194
17030 Real-Time Path Planning for Unmanned Air Vehicles Using Improved Rapidly-Exploring Random Tree and Iterative Trajectory Optimization
Authors: A. Ramalho, L. Romeiro, R. Ventura, A. Suleman
Abstract:
A real-time path planning framework for Unmanned Air Vehicles, and in particular multi-rotors, is proposed. The framework is designed to provide feasible trajectories from the current UAV position to a goal state, taking into account constraints such as obstacle avoidance, problem kinematics, and vehicle limitations such as maximum speed and maximum acceleration. The framework computes feasible paths online, allowing new, unknown, dynamic obstacles to be avoided without fully re-computing the trajectory. These features are achieved using an iterative process in which the robot computes and optimizes the trajectory while performing the mission objectives. A first trajectory is computed using a modified Rapidly-Exploring Random Tree (RRT) algorithm that provides trajectories respecting a maximum curvature constraint. The trajectory optimization is accomplished using the Interior Point Optimizer (IPOPT) as a solver. The framework has proven able to compute a trajectory and optimize it to a local optimum with computational efficiency, making it feasible for real-time operations.
Keywords: interior point optimization, multi-rotors, online path planning, rapidly exploring random trees, trajectory optimization
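A minimal 2-D RRT sketch is given below to illustrate the first stage only; the authors' modified planner additionally enforces the maximum-curvature constraint and hands the resulting path to IPOPT for optimisation, neither of which is shown here. The workspace, obstacle and parameters are hypothetical.

```python
import math, random

def rrt(start, goal, is_free, bounds, step=0.5, max_iter=5000, goal_tol=0.5):
    """Minimal 2-D RRT: grow a tree by steering from the nearest node towards
    random samples (with a small goal bias) until the goal region is reached."""
    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        sample = goal if random.random() < 0.1 else (
            random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        s = min(step, d)
        new = (nx + s * (sample[0] - nx) / d, ny + s * (sample[1] - ny) / d)
        if not is_free(new):
            continue                      # reject nodes inside obstacles
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1  # backtrack to recover the path
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

free = lambda p: math.dist(p, (5.0, 5.0)) > 1.5      # one circular obstacle
print(rrt((0.0, 0.0), (9.0, 9.0), free, bounds=((0, 10), (0, 10))))
```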
Procedia PDF Downloads 135
17029 Determination of Processing Parameters of Decaffeinated Black Tea by Using Pilot-Scale Supercritical CO₂ Extraction
Authors: Saziye Ilgaz, Atilla Polat
Abstract:
There is a need to develop new processing techniques that ensure the safety and quality of the final product, decaffeinated black tea, while minimizing the adverse impact of extraction solvents on the environment and the residue levels of these solvents in the final product. In this study, pilot-scale supercritical carbon dioxide (SCCO₂) extraction was used to produce decaffeinated black tea in place of solvent extraction. Pressure (250, 375, 500 bar), extraction time (60, 180, 300 min), temperature (55, 62.5, 70 °C), CO₂ flow rate (1, 2, 3 LPM) and co-solvent quantity (0, 2.5, 5 %mol) were selected as extraction parameters. A five-factor Box-Behnken experimental design with three center points was used to generate 46 different processing conditions for caffeine removal from black tea samples. As a result of these 46 experiments, the caffeine content of the black tea samples was reduced from 2.16% to 0–1.81%. The experiments showed that extraction time, pressure, CO₂ flow rate and co-solvent quantity had a great impact on the decaffeination yield. Response surface methodology (RSM) was used to optimize the parameters of the supercritical carbon dioxide extraction. The optimum extraction parameters obtained for decaffeinated black tea were as follows: extraction temperature of 62.5 °C, extraction pressure of 375 bar, CO₂ flow rate of 3 LPM, extraction time of 176.5 min and co-solvent quantity of 5 %mol.
Keywords: supercritical carbon dioxide, decaffeination, black tea, extraction
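In response surface methodology, the Box-Behnken runs are typically fitted with a full second-order model before the optimum is located; the sketch below builds that quadratic design matrix and fits it by least squares. The 46-run data here are random placeholders standing in for the coded factors and measured decaffeination yields, and the fitted surface would then be maximized (for example by a grid search or a gradient method) to obtain the optimum, a step not shown.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Full second-order RSM terms: intercept, linear, two-factor interactions
    and pure quadratic terms for k coded factors."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Placeholder data standing in for the 46-run, five-factor Box-Behnken design
# (pressure, time, temperature, CO2 flow rate, co-solvent) and the yields.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(46, 5))      # coded factor levels
y = rng.uniform(0, 100, size=46)          # decaffeination yield, %

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(beta.shape)   # 1 + 5 + 10 + 5 = 21 coefficients of the fitted surface
```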
Procedia PDF Downloads 364