Search results for: real time simulator

18656 Agegraphic Dark Energy with GUP

Authors: H. R. Fazlollahi

Abstract:

The origin of dark energy is unknown, so describing this mysterious component of large-scale structure requires modifying our theories of general relativity. Although in most models dark energy arises from extra terms introduced by modifying the Einstein-Hilbert action, its origin may trace back to fundamental aspects of the ground-state energy of space-time given by quantum mechanics. Endowing space-time in general relativity with quantum-mechanical properties leads to the Karolyhazy relation and the corresponding energy density of the quantum fluctuations of space-time. In this study, we extend the energy density of the quantum fluctuations of space-time through the generalized uncertainty principle, with an eye to the Karolyhazy approach. We also consider the application of this idea to late-time evolution and show how the extra term in the generalized uncertainty principle plays the role of a plausible interaction term in the suggested model.
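
For orientation, the standard relations underlying this construction are sketched below in the conventions commonly used in the agegraphic dark energy literature; the exact coefficients and GUP parameterization adopted in the paper may differ.

```latex
% Karolyhazy relation, the induced energy density of space-time fluctuations,
% and a common quadratic form of the generalized uncertainty principle (GUP).
\begin{align}
  \delta t \sim \beta\, t_p^{2/3}\, t^{1/3}, \qquad
  \rho_q \sim \frac{3 n^2 m_p^2}{t^2}, \qquad
  \Delta x \, \Delta p \gtrsim \frac{\hbar}{2}\left(1 + \alpha\, l_p^2\, \frac{(\Delta p)^2}{\hbar^2}\right).
\end{align}
```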

Keywords: generalized uncertainty principle, Karolyhazy approach, agegraphic dark energy, cosmology

Procedia PDF Downloads 62
18655 Split-Flow Method to Reduce Duty Required in Amine Gas Sweetening Units

Authors: Abdallah Sofiane Berrouk, Dara Satyadileep

Abstract:

This paper investigates the feasibility of retrofitting a Middle East-based commercial amine sweetening unit with a split-flow scheme, which involves withdrawing a portion of partially stripped semi-lean solvent from the stripping column and re-injecting it into the absorption column to reduce the overall energy consumption of the unit. This method is explored comprehensively by performing a parametric analysis of the split fraction of the semi-lean solvent using the kinetics-based process simulator ProMax V 3.2. Re-boiler duty, condenser duty, solvent cooling, and pumping loads are analysed as functions of the split fraction of the semi-lean solvent from the stripper. It is shown that the proposed method significantly reduces the overall energy consumption of the unit, resulting in annual savings of 325,000 USD. A thorough economic analysis performed using Aspen Economic Evaluation V 8.4 reveals that the retrofit scheme pays back the capital cost in less than eight years and is highly recommended for any commercial plant having suitable provisions for solvent inlet/withdrawal on the columns.

Keywords: split flow, Amine, gas processing, optimization

Procedia PDF Downloads 314
18654 A Study on a Long Life Hybrid Battery System Consisting of a Ni-63 Betavoltaic Battery and an All Solid Battery

Authors: Bosung Kim, Youngmok Yun, Sungho Lee, Chanseok Park

Abstract:

Chemical and physical batteries face limits to the power they can supply and to their operation in the space environment. Therefore, research on utilizing nuclear energy in space has been in progress since the 1950s in the major industrialized countries. In this study, a self-rechargeable battery with a long life relative to the half-life of the radioisotope is suggested. The hybrid system is composed of a betavoltaic battery, an all-solid battery, and an energy harvesting board. The betavoltaic battery can produce electrical power for at least 10 years using the Ni-63 radioisotope and a silicon-based semiconductor. The electrical power generated by the betavoltaic battery is stored in the all-solid battery, and the stored power is used when necessary. The hybrid system board is composed of input terminals, a boost circuit, charging terminals, and output terminals. The betavoltaic and all-solid batteries are connected to the input and output terminals, respectively. An electric current of 10 µA is applied to the system board using a high-resolution power simulator. The system efficiencies are measured for boosted voltages of 1.8 V, 2.4 V, and 3 V. As a result, the efficiency of the system board is about 75% after boosting the voltage from 1 V to 3 V.
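
As a rough illustration of how such a board efficiency figure is obtained, the sketch below computes the ratio of output to input power from measured currents and voltages; the numbers are hypothetical, chosen only to mirror the roughly 75% reported above.

```python
# Illustrative boost-stage efficiency calculation from assumed measurements.
v_in, i_in = 1.0, 10e-6          # betavoltaic side: 1 V at 10 µA (from the abstract)
v_out, i_out = 3.0, 2.5e-6       # boosted output feeding the all-solid battery (assumed)

p_in = v_in * i_in               # input power in watts
p_out = v_out * i_out            # output power in watts
print(f"boost efficiency = {p_out / p_in:.0%}")   # 75% with these assumed values
```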

Keywords: isotope, betavoltaic, nuclear, battery, energy harvesting

Procedia PDF Downloads 310
18653 Effect of Common Yoga Protocol on Reaction Time of Football Players

Authors: Vikram Singh

Abstract:

The objective of the study was to examine the effectiveness of the common yoga protocol on the reaction time (simple visual reaction time, SVRT, measured in milliseconds/seconds) of male football players in the age group of 15 to 21 years. Forty boys were randomly assigned to two groups, i.e., control and experimental. SVRT for both groups was measured on day 1, and the post-intervention measurement (the intervention being the common yoga protocol) was taken after 45 days of training given to the experimental group only. One-way ANOVA (univariate analysis) and an independent t-test, using the SPSS 23 statistical package, were applied to obtain and analyze the results. There was a significant difference in the simple visual reaction time of the experimental group after 45 days of the yoga protocol (p = .032), t(33.05) = 3.881, p = .000 (two-tailed). The null hypothesis (that there would be no post-measurement differences in the reaction times of the control and experimental groups) was rejected at p < .05; therefore, the alternative hypothesis was accepted.
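
A minimal sketch of the post-intervention comparison described above, using SciPy instead of SPSS; the reaction-time values (in milliseconds) are hypothetical, not the study's data.

```python
# Two-sample comparison of post-intervention reaction times (illustrative data).
from scipy import stats

experimental = [210, 198, 205, 190, 202, 195, 188, 200]   # after 45 days of yoga
control      = [232, 225, 240, 228, 235, 230, 226, 238]   # no intervention

# Welch's t-test (does not assume equal variances), two-tailed by default.
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```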

Keywords: footballers, t-test, yoga protocol, reaction time

Procedia PDF Downloads 243
18652 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters in practice, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software package that uses behaviour simulation of the people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
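
A minimal dynamic time warping (DTW) sketch of the kind used in the identification step: it aligns an observed power segment with an appliance signature and returns the cumulative alignment cost. It is illustrative only, not the authors' implementation, and the sample values are hypothetical 1/60 Hz power readings.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

signature = np.array([0, 0, 120, 118, 121, 0, 0], dtype=float)   # hypothetical appliance cycle (W)
observed  = np.array([0, 118, 119, 122, 120, 0], dtype=float)    # observed segment (W)
print(dtw_distance(signature, observed))
```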

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 62
18651 Bidirectional Dynamic Time Warping Algorithm for the Recognition of Isolated Words Impacted by Transient Noise Pulses

Authors: G. Tamulevičius, A. Serackis, T. Sledevič, D. Navakauskas

Abstract:

We consider the biggest challenge in speech recognition – noise reduction. Traditionally, detected transient noise pulses are removed from the corrupted speech using pulse models. In this paper, we propose to cope with the problem directly in the Dynamic Time Warping domain. A Bidirectional Dynamic Time Warping algorithm for the recognition of isolated words impacted by transient noise pulses is proposed. It uses a simple transient noise pulse detector, employs bidirectional computation of dynamic time warping, and manipulates the warping results directly. Experimental investigation against several alternative solutions confirms the effectiveness of the proposed algorithm in reducing the impact of noise on the recognition process: a 3.9% increase in the noisy speech recognition rate is achieved.

Keywords: transient noise pulses, noise reduction, dynamic time warping, speech recognition

Procedia PDF Downloads 541
18650 An Improved Approach to Solve Two-Level Hierarchical Time Minimization Transportation Problem

Authors: Kalpana Dahiya

Abstract:

This paper discusses a two-level hierarchical time minimization transportation problem, an important class of transportation problems arising in industry. This problem has been studied by various researchers, and a number of polynomial-time iterative algorithms are available to find its solution. All the existing algorithms, though efficient, have some shortcomings. The current study proposes an alternative solution algorithm for the problem that is more efficient in terms of computational time than the existing algorithms. Results justifying the underlying theory of the proposed algorithm are given. Further, a detailed comparison of the computational behaviour of all the algorithms on randomly generated instances of the problem of different sizes validates the efficiency of the proposed algorithm.
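
To make the problem concrete, the sketch below solves only the first level of the hierarchy on a toy instance: it finds the smallest time T such that all supply can be shipped using routes whose time does not exceed T (the second level, which optimizes the shipments completed after that bottleneck time, is omitted). The data are hypothetical, and the feasibility check simply reuses SciPy's LP solver rather than the specialized iterative algorithms discussed above.

```python
import numpy as np
from scipy.optimize import linprog

times = np.array([[3., 7., 2.],
                  [5., 4., 6.]])          # t_ij: shipping time on route (i, j)
supply = np.array([30., 40.])
demand = np.array([20., 25., 25.])

def feasible(max_t):
    """Check whether a feasible flow exists using only routes with t_ij <= max_t."""
    m, n = times.shape
    allowed = (times <= max_t).ravel()
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row sums = supply
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # column sums = demand
    b_eq = np.concatenate([supply, demand])
    bounds = [(0, None) if ok else (0, 0) for ok in allowed]
    res = linprog(c=np.zeros(m * n), A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.status == 0

# Scan the distinct route times in increasing order; the first feasible one
# is the optimal first-level (bottleneck) shipping time.
for t in sorted(set(times.ravel())):
    if feasible(t):
        print("First-level minimum time:", t)
        break
```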

Keywords: global optimization, hierarchical optimization, transportation problem, concave minimization

Procedia PDF Downloads 150
18649 Expansive-Restrictive Style: Conceptualizing Knowledge Workers

Authors: Ram Manohar Singh, Meenakshi Gupta

Abstract:

Various terms, such as ‘learning style’, ‘cognitive style’, ‘conceptual style’, ‘thinking style’, and ‘intellectual style’, are used in the literature to refer to an individual’s characteristic and consistent approach to organizing and processing information. However, style concepts are criticized for mutually overlapping definitions and confusing classification. This confusion should be addressed at the conceptual as well as the empirical level. This paper attempts to bridge this gap in the literature by proposing a new concept, the expansive-restrictive intellectual style, based on a phenomenological analysis of an auto-ethnography and interviews of 26 information technology (IT) professionals working in knowledge-intensive organizations (KIOs) in India. The expansive style is an individual’s preference to expand his/her horizon of knowledge and understanding by grasping the real meaning and structure of his/her work. On the contrary, the restrictive style is characterized by an individual’s preference for a minimalist approach at work, reflected in executing a job efficiently without attempting to understand the real meaning and structure of the work. The analysis suggests that the expansive-restrictive style has three dimensions: (1) field dependence-independence, (2) cognitive involvement, and (3) epistemological beliefs.

Keywords: expansive, knowledge workers, restrictive, style

Procedia PDF Downloads 408
18648 Earthquakes and Buildings: Lessons Learnt from Past Earthquakes in Turkey

Authors: Yavuz Yardım

Abstract:

The most important criterion in structural engineering is the structure’s ability to carry the intended loads safely. The key element of this ability is the mathematical modeling of the real loading situation into simple load inputs for use in structural analysis and design. Amongst the many different types of loads, the most challenging is the earthquake load: its possible magnitude is unclear and its timing is unknown. Therefore, the concepts of intended loads and safety have been built on the experience of previous earthquakes’ impact on structures. Understanding and developing these concepts is achieved by investigating the performance of structures after real earthquakes. The damage observed after an earthquake provides the results of thousands of full-scale structural tests under a real seismic load. Thus, earthquakes reveal all the weaknesses, mistakes, and deficiencies of analysis, design rules, and practice. This study deals with lessons learnt from earthquakes recorded over the last two decades in Turkey. The results of investigations after several earthquakes expose many deficiencies in structural detailing, inappropriate design, wrong architectural layout, and, mainly, mistakes in construction practice.

Keywords: earthquake, seismic assessment, RC buildings, building performance

Procedia PDF Downloads 252
18647 Object-Centric Process Mining Using Process Cubes

Authors: Anahita Farhang Ghahfarokhi, Alessandro Berti, Wil M.P. van der Aalst

Abstract:

Process mining provides ways to analyze business processes. Common process mining techniques consider the process as a whole. However, in real-life business processes, different behaviors exist that make the overall process too complex to interpret. Process comparison is a branch of process mining that isolates the different behaviors of the process from each other by using process cubes. Process cubes organize event data along different dimensions. Each cell contains a set of events that can be used as input to process mining techniques. Existing work on process cubes assumes a single case notion. However, in real processes, several case notions (e.g., order, item, package) are intertwined. Object-centric process mining is a new branch of process mining addressing multiple case notions in a process. To build a bridge between object-centric process mining and process comparison, we propose a process cube framework that supports process cube operations such as slice and dice on object-centric event logs. To facilitate the comparison, the framework is integrated with several object-centric process discovery approaches.
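
To illustrate what slice and dice mean on event data, here is a minimal pandas sketch on a flattened event log; the column names and values are hypothetical, and the authors' framework operates on genuinely object-centric logs rather than a flat table.

```python
import pandas as pd

log = pd.DataFrame({
    "event_id": [1, 2, 3, 4, 5, 6],
    "activity": ["create order", "pick item", "pick item", "pack", "ship", "pack"],
    "order":    ["o1", "o1", "o1", "o1", "o1", "o2"],
    "item":     [None, "i1", "i2", "i1", None, "i3"],
    "month":    ["2021-01", "2021-01", "2021-02", "2021-02", "2021-02", "2021-02"],
})

# Slice: fix one dimension to a single value (events of January only).
january_cell = log[log["month"] == "2021-01"]

# Dice: restrict several dimensions to subsets of values.
dice_cell = log[log["month"].isin(["2021-02"]) &
                log["activity"].isin(["pick item", "pack"])]

# Each resulting cell can then be handed to a discovery algorithm.
print(january_cell.shape, dice_cell.shape)
```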

Keywords: multidimensional process mining, multi-perspective business processes, OLAP, process cubes, process discovery, process mining

Procedia PDF Downloads 240
18646 Primary-Color Emitting Photon Energy Storage Nanophosphors for Developing High Contrast Latent Fingerprints

Authors: G. Swati, D. Haranath

Abstract:

Commercially available long-afterglow/persistent phosphors are proprietary materials, and hence the exact composition and phase responsible for their luminescent characteristics, such as initial intensity and afterglow luminescence time, are not known. Further, to generate various emission colors, commercially available persistent phosphors are physically blended with fluorescent organic dyes such as rhodamine, kiton, and methylene blue. Blending phosphors with organic dyes gives complete color coverage of the visible spectrum; however, with time, such phosphors undergo thermal and photo-bleaching, which results in the loss of their true emission color. Hence, the current work is dedicated to studies on inorganic, thermally and chemically stable, primary-color-emitting nanophosphors, namely SrAl2O4:Eu2+, Dy3+, (CaZn)TiO3:Pr3+, and Sr2MgSi2O7:Eu2+, Dy3+. The SrAl2O4:Eu2+, Dy3+ phosphor exhibits strong excitation in the UV and visible region (280-470 nm) with a broad emission peak centered at 514 nm, the characteristic emission of the parity-allowed 4f65d1→4f7 transitions of Eu2+ (8S7/2→2D5/2). The sunlight-excitable Sr2MgSi2O7:Eu2+, Dy3+ nanophosphor emits blue light (464 nm) with Commission Internationale de l'Eclairage (CIE) coordinates of (0.15, 0.13), a color purity of 74%, and an afterglow time of > 5 hours for dark-adapted human eyes. The (CaZn)TiO3:Pr3+ phosphor system possesses high color purity (98%) and emits an intense, stable, and narrow red emission at 612 nm due to intra-4f transitions (1D2 → 3H4), with an afterglow time of 0.5 hour. The unusual persistent luminescence of these nanophosphors supersedes background effects without losing sensitive information. These nanophosphors offer the advantages of visible-light excitation, negligible substrate interference, high-contrast bifurcation of the ridge pattern, and a non-toxic nature, revealing the ridge details of the fingerprints. Both level 1 and level 2 features of a fingerprint can be studied, which are useful for classification, indexing, comparison, and personal identification. A facile methodology to extract high-contrast fingerprints on non-porous and porous substrates using a chemically inert, visible-light-excitable, nanosized phosphorescent label in the dark is presented. The chemistry of the non-covalent physisorption interaction between the long-afterglow phosphor powder and the sweat residue in fingerprints is discussed in detail. Real-time fingerprint development on porous and non-porous substrates has also been performed. To conclude, apart from conventional dark-vision applications, the as-prepared primary-color-emitting afterglow phosphors are potential candidates for developing high-contrast latent fingerprints.

Keywords: fingerprints, luminescence, persistent phosphors, rare earth

Procedia PDF Downloads 194
18645 GPU Based High Speed Error Protection for Watermarked Medical Image Transmission

Authors: Md Shohidul Islam, Jongmyon Kim, Ui-pil Chong

Abstract:

Medical images are an integral part of e-health care and e-diagnosis systems. Medical image watermarking is widely used to protect patients’ information from malicious alteration and manipulation. The watermarked medical images are transmitted over the internet among patients and primary and referred physicians. The images are highly prone to corruption in the wireless transmission medium due to various noise sources, deflections, and refractions. Distortion in the received images leads to faulty watermark detection and inappropriate disease diagnosis. To address the issue, this paper utilizes an error correction code (ECC), the (8, 4) Hamming code, in an existing watermarking system. In addition, we implement the computationally complex ECC on a graphics processing unit (GPU) to accelerate it and support real-time requirements. Experimental results show that the GPU achieves considerable speedup over the sequential CPU implementation while maintaining 100% ECC efficiency.
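
For reference, here is a minimal sketch of (8, 4) extended Hamming coding, i.e., a (7, 4) Hamming codeword plus one overall parity bit, which corrects single-bit errors; this is illustrative CPU-side code, not the authors' GPU kernel.

```python
def hamming84_encode(data4):
    """Encode 4 data bits into an 8-bit extended Hamming codeword."""
    d1, d2, d3, d4 = data4
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word7 = [p1, p2, d1, p3, d2, d3, d4]       # classic bit positions 1..7
    p_overall = sum(word7) % 2                  # extended parity bit
    return word7 + [p_overall]

def hamming84_decode(word8):
    """Correct a single-bit error (if any) and return the 4 data bits."""
    w = list(word8[:7])
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]              # parity check over positions 1,3,5,7
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]              # positions 2,3,6,7
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]              # positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3             # 1-indexed error position, 0 = no error
    if syndrome:
        w[syndrome - 1] ^= 1                    # flip the erroneous bit
    return [w[2], w[4], w[5], w[6]]             # recovered data bits d1..d4

data = [1, 0, 1, 1]
code = hamming84_encode(data)
code[3] ^= 1                                    # inject a single-bit error
assert hamming84_decode(code) == data
```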

Keywords: medical image watermarking, e-health system, error correction, Hamming code, GPU

Procedia PDF Downloads 276
18644 Quantitative Analysis of Multiprocessor Architectures for Radar Signal Processing

Authors: Deepak Kumar, Debasish Deb, Reena Mamgain

Abstract:

Radar signal processing requires high number-crunching capability, which is most often achieved using a multiprocessor platform. Though a multiprocessor platform provides the capability to meet real-time computational challenges, its architecture, along with the mapping of the algorithm onto that architecture, plays a vital role in using the platform efficiently. To this end, along with standard performance metrics, a few additional metrics are defined that help in evaluating the multiprocessor platform together with the algorithm mapping. A generic multiprocessor architecture cannot suit all processing requirements. Depending on the system requirements and the types of algorithms used, the most suitable architecture for the given problem is decided. In this paper, we study different architectures and quantify the different performance metrics, which enables a comparison of the architectures on their merits. We also carried out case studies of different architectures and their efficiency depending on whether parallelism is exploited in the algorithm, in the data, or in both.
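
A short sketch of the kind of standard metrics used to compare such platforms: speedup, parallel efficiency, and load imbalance. The timing numbers are hypothetical, and the additional metrics actually defined in the paper may differ.

```python
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_processors):
    return speedup(t_serial, t_parallel) / n_processors

def load_imbalance(per_processor_busy_times):
    mean_load = sum(per_processor_busy_times) / len(per_processor_busy_times)
    return max(per_processor_busy_times) / mean_load   # 1.0 = perfectly balanced

t1, tp, p = 96.0, 14.0, 8                  # ms per radar dwell (hypothetical)
busy = [13.2, 14.0, 12.5, 13.8, 12.9, 13.6, 14.0, 12.7]
print(speedup(t1, tp), efficiency(t1, tp, p), load_imbalance(busy))
```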

Keywords: radar signal processing, multiprocessor architecture, efficiency, load imbalance, buffer requirement, pipeline, parallel, hybrid, cluster of processors (COPs)

Procedia PDF Downloads 401
18643 The Effect of User Comments on Traffic Application Usage

Authors: I. Gokasar, G. Bakioglu

Abstract:

With the unprecedented pace of technological improvement, people have started to solve their problems with the help of technological tools. According to the application stores and websites in which people evaluate and comment on traffic apps, there are more than 100 traffic applications, with different features depending on their purpose of use, ranging from apps for public transit modes to apps for private cars. This study focuses on the top 30 traffic applications, chosen by their download counts. All data about the traffic applications were obtained from the related websites. The purpose of this study is to analyze traffic applications in terms of their categorical attributes by developing a regression model. The analysis results suggest that negative interpretations (e.g., being deficient) do not lead to lower star ratings of the applications; however, those negative interpretations do result in a smaller increase in the star rating. In addition, women give higher star ratings than men when evaluating traffic applications.
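
A minimal sketch of a rating regression with dummy variables, in the spirit of the analysis described above; the data frame, column names, and values are hypothetical, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf

apps = pd.DataFrame({
    "star_rating":           [4.5, 3.8, 4.2, 4.7, 3.9, 4.1, 4.6, 4.0],
    "has_negative_comments": [0, 1, 1, 0, 1, 0, 0, 1],   # dummy variable
    "transit_focused":       [1, 0, 1, 0, 0, 1, 1, 0],   # dummy variable
    "downloads_millions":    [12, 3, 7, 25, 2, 9, 15, 4],
})

# Ordinary least squares with dummy regressors for categorical attributes.
model = smf.ols(
    "star_rating ~ has_negative_comments + transit_focused + downloads_millions",
    data=apps).fit()
print(model.summary())
```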

Keywords: traffic app, real-time information, traffic congestion, regression analysis, dummy variables

Procedia PDF Downloads 412
18642 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model

Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini

Abstract:

The importance of correctly allocating improvement programs has attracted growing interest in recent years. Because of their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be most effective and to survive the strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity-constrained resources. This is a research gap which is studied in depth in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity-constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, because of their importance for lead time reduction, were the mean time between failures and the mean time to repair. The lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed at the two capacity-constrained resources, and (x) time between failures improvement directed at the two capacity-constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed, and hybrid. Several comparisons of the effect of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity-constrained resources, companies should use hybrid approaches. An important contribution to academia is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity-constrained resources (more than 95% utilization) is an important contribution to the literature, as is the allocation problem with two CCRs and the possibility of floating capacity-constrained resources. The results provided the best improvement strategies considering the different strategies for allocating improvement programs and the different positions of the capacity-constrained resources. Finally, both the hybrid time to repair improvement and the hybrid time between failures improvement strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the flow shop analyzed. Future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity-constrained resources.
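
As a back-of-the-envelope illustration of why MTTR and MTBF matter for lead time, the sketch below uses availability and a crude single-station queueing approximation; it is not the System Dynamics-Factory Physics model used in the paper, and all numbers are hypothetical.

```python
def station_lead_time(t0_hours, mtbf, mttr, arrival_rate):
    """t0_hours: natural process time per unit; arrival_rate: units per hour."""
    availability = mtbf / (mtbf + mttr)        # fraction of time the station is up
    te = t0_hours / availability               # effective process time per unit
    u = arrival_rate * te                      # station utilization
    if u >= 1.0:
        return float("inf")                    # overloaded station
    wait = te * u / (1.0 - u)                  # crude M/M/1-style queue time
    return wait + te                           # lead time = queue time + process time

base        = station_lead_time(t0_hours=0.05, mtbf=40.0, mttr=4.0, arrival_rate=16.0)
better_mttr = station_lead_time(0.05, 40.0, 2.0, 16.0)    # MTTR cut in half
better_mtbf = station_lead_time(0.05, 80.0, 4.0, 16.0)    # MTBF doubled
print(base, better_mttr, better_mtbf)
```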

Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures

Procedia PDF Downloads 109
18641 Design and Implementation of an AI-Enabled Task Assistance and Management System

Authors: Arun Prasad Jaganathan

Abstract:

In today's dynamic industrial world, traditional task allocation methods often fall short in adapting to evolving operational conditions. This paper introduces an AI-enabled task assistance and management system designed to overcome the limitations of conventional approaches. By using artificial intelligence (AI) and machine learning (ML), the system intelligently interprets user instructions, analyzes tasks, and allocates resources based on real-time data and environmental factors. Additionally, geolocation tracking enables proactive identification of potential delays, ensuring timely interventions. With its transparent reporting mechanisms, the system provides stakeholders with clear insights into task progress, fostering accountability and informed decision-making. The paper presents a comprehensive overview of the system architecture, algorithm, and implementation, highlighting its potential to revolutionize task management across diverse industries.

Keywords: artificial intelligence, machine learning, task allocation, operational efficiency, resource optimization

Procedia PDF Downloads 36
18640 Rescaled Range Analysis of Seismic Time-Series: Example of the Recent Seismic Crisis of Alhoceima

Authors: Marina Benito-Parejo, Raul Perez-Lopez, Miguel Herraiz, Carolina Guardiola-Albert, Cesar Martinez

Abstract:

Persistency, long-term memory, and randomness are intrinsic properties of earthquake time series. Rescaled Range Analysis (RS-Analysis) was introduced by Hurst in 1956 and modified by Mandelbrot and Wallis in 1964. This method represents a simple and elegant analysis which determines the range of variation of one natural property (here, the seismic energy released) in a time interval. Despite its simplicity, there is complexity inherent in the property measured: the cumulative curve of the energy released in time has the well-known fractal geometry of a devil’s staircase. This geometry is used for determining the maximum and minimum values of the range, which is normalized by the standard deviation. The rescaled range obtained obeys a power law in time, and the exponent is the Hurst value. Depending on this value, time series can be classified as having long-term or short-term memory. Hence, an algorithm has been developed for compiling the RS-Analysis for time series of earthquakes by days. Completeness of the time distribution and local stationarity of the time series are required. The interest of this analysis lies in its application to a complex seismic crisis where different earthquakes take place in clusters over a short period. Accordingly, the Hurst exponent has been obtained for the seismic crisis of Alhoceima (Mediterranean Sea) of January-March 2016, in which at least five medium-sized earthquakes were triggered. According to the Hurst exponent values obtained for each cluster, a different mechanical origin can be detected, corroborated by the focal mechanisms calculated by the official institutions. Therefore, this type of analysis not only allows an approach to a greater understanding of a seismic series but also makes it possible to discern different types of seismic origins.
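
A minimal rescaled range sketch, assuming a one-dimensional series of daily released energy; it illustrates the R/S recipe described above and is not the authors' exact algorithm.

```python
import numpy as np

def rescaled_range(series):
    """R/S statistic of one window: range of the cumulative deviations / std."""
    x = np.asarray(series, dtype=float)
    dev = np.cumsum(x - x.mean())              # cumulative departure from the mean
    r = dev.max() - dev.min()                  # range of the "devil's staircase"
    s = x.std(ddof=0)
    return r / s if s > 0 else np.nan

def hurst_exponent(series, min_window=8):
    """Fit log(R/S) against log(window length) over dyadic window sizes."""
    x = np.asarray(series, dtype=float)
    sizes, rs_means = [], []
    n = min_window
    while n <= len(x) // 2:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        sizes.append(n)
        rs_means.append(np.nanmean([rescaled_range(c) for c in chunks]))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope                               # H > 0.5: persistence (long memory)

rng = np.random.default_rng(0)
print(hurst_exponent(rng.standard_normal(2048)))   # roughly 0.5 for white noise
```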

Keywords: Alhoceima crisis, earthquake time series, Hurst exponent, rescaled range analysis

Procedia PDF Downloads 308
18639 Close-Range Remote Sensing Techniques for Analyzing Rock Discontinuity Properties

Authors: Sina Fatolahzadeh, Sergio A. Sepúlveda

Abstract:

This paper presents advanced developments in close-range, terrestrial remote sensing techniques to enhance the characterization of rock masses. The study integrates two state-of-the-art laser-scanning technologies, the HandySCAN and GeoSLAM laser scanners, to extract high-resolution geospatial data for rock mass analysis. These instruments offer high accuracy and precision, low acquisition time, and high efficiency in capturing intricate geological features in small to medium-sized outcrops and slope cuts. Using the HandySCAN and GeoSLAM laser scanners facilitates real-time, three-dimensional mapping of rock surfaces, enabling comprehensive assessments of rock mass characteristics. The collected data provide valuable insights into structural complexities, surface roughness, and discontinuity patterns, which are essential for geological and geotechnical analyses. The synergy of these advanced remote sensing technologies contributes to a more precise and straightforward understanding of rock mass behavior. In this case, the main parameters of RQD, joint spacing, persistence, aperture, roughness, infill, weathering, water condition, and joint orientation in a slope cut along the Sea-to-Sky Highway, BC, were analyzed remotely to calculate and evaluate the Rock Mass Rating (RMR) and Geological Strength Index (GSI) classification systems. Automatic and manual analyses of the acquired data are then compared with field measurements. The results show the usefulness of the proposed remote sensing methods and their good agreement with the actual field data.
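
To show how the extracted discontinuity parameters feed the classification, here is a minimal sketch of assembling an RMR value from individual chart ratings (RMR89-style); the ratings passed in are hypothetical assumptions, not values reported in the paper.

```python
def rock_mass_rating(ucs, rqd, spacing, condition, groundwater, orientation_adj):
    """Each argument is the chart rating for that parameter, not the raw measurement."""
    rmr = ucs + rqd + spacing + condition + groundwater + orientation_adj
    if rmr > 80:
        cls = "I (very good rock)"
    elif rmr > 60:
        cls = "II (good rock)"
    elif rmr > 40:
        cls = "III (fair rock)"
    elif rmr > 20:
        cls = "IV (poor rock)"
    else:
        cls = "V (very poor rock)"
    return rmr, cls

# Hypothetical ratings for a slope cut:
print(rock_mass_rating(ucs=7, rqd=17, spacing=10, condition=20,
                       groundwater=10, orientation_adj=-5))
```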

Keywords: remote sensing, rock mechanics, rock engineering, slope stability, discontinuity properties

Procedia PDF Downloads 49
18638 A Genetic Algorithm Approach for Multi Constraint Team Orienteering Problem with Time Windows

Authors: Uyanga Sukhbaatar, Ahmed Lbath, Mendamar Majig

Abstract:

The Orienteering Problem (OP) is the best-known starting point for modeling the tourist trip design problem. In order to meet tourists’ interests and constraints, the OP has become more and more complicated to solve. The Multi Constraint Team Orienteering Problem with Time Windows (MCTOPTW) is the latest extension of the OP, differing from other extensions by including additional associated constraints. The goal of the MCTOPTW is to maximize the tourist’s satisfaction score while not violating any of these constraints. This paper presents a genetic algorithm approach to tackle the MCTOPTW. Benchmark data from the literature are tested with our algorithm, and the performance results are compared.
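
For intuition, the sketch below applies a bare-bones genetic algorithm to a heavily simplified orienteering instance: select a subset of points of interest that maximizes the total score under a single time budget (no teams, time windows, or the extra MCTOPTW constraints). All data and parameters are hypothetical.

```python
import random

random.seed(1)
scores      = [random.randint(1, 10) for _ in range(20)]       # score of each point of interest
visit_times = [random.uniform(0.5, 2.0) for _ in range(20)]    # time cost of each visit
BUDGET = 10.0

def fitness(chrom):
    total_time  = sum(t for t, g in zip(visit_times, chrom) if g)
    total_score = sum(s for s, g in zip(scores, chrom) if g)
    return total_score if total_time <= BUDGET else 0           # infeasible -> penalized

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.05):
    return [g ^ 1 if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for _ in range(200):                                             # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                             # elitist selection
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(40)]
    pop = elite + children

print("best score:", fitness(max(pop, key=fitness)))
```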

Keywords: multi constraint team orienteering problem with time windows, genetic algorithm, tour planning system

Procedia PDF Downloads 612
18637 Reducing Power Consumption in Network on Chip Using Scramble Techniques

Authors: Vinayaga Jagadessh Raja, R. Ganesan, S. Ramesh Kumar

Abstract:

An ever more significant fraction of the overall power dissipation of a network-on-chip (NoC) based system-on-chip (SoC) is due to the interconnection scheme. In fact, as technology shrinks, the power contribution of NoC links starts to compete with that of NoC routers. In this paper, we propose the use of clock gating together with data encoding techniques as a viable way to reduce both the power dissipation and the time consumption of NoC links. The proposed scramble scheme exploits the wormhole switching technique: flits are scrambled by the network interface (NI) before they are injected into the network and are decoded by the target NI. This makes the scheme transparent to the underlying network, since the encoder and decoder logic is integrated in the NI and no modification of the routers’ structural design is required. We evaluate the proposed scramble scheme on a set of representative data streams (both synthetic and extracted from real applications), showing that it is possible to reduce the power contribution of both the self-switching activity and the coupling switching activity in inter-router links.
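
As background for the two terms at the end of the abstract, the sketch below counts self-switching (bit toggles on a wire between consecutive flits) and coupling switching (opposite transitions on adjacent wires) for a generic link; it is a generic activity model with made-up flit values, not the authors' encoder.

```python
def self_switching(prev, curr):
    """Number of wires that toggle between two consecutive flits."""
    return sum(p != c for p, c in zip(prev, curr))

def coupling_switching(prev, curr):
    """Number of adjacent wire pairs that switch in opposite directions."""
    count = 0
    for i in range(len(curr) - 1):
        t_i = curr[i] - prev[i]           # -1, 0, or +1 transition on wire i
        t_j = curr[i + 1] - prev[i + 1]
        if t_i * t_j == -1:               # adjacent wires toggling in opposite ways
            count += 1
    return count

prev_flit = [0, 1, 1, 0, 1, 0, 0, 1]
curr_flit = [1, 0, 1, 1, 0, 0, 1, 1]
print(self_switching(prev_flit, curr_flit), coupling_switching(prev_flit, curr_flit))
```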

Keywords: Xilinx 12.1, power consumption, encoder, NoC

Procedia PDF Downloads 391
18636 A Wearable Fluorescence Imaging Device for Intraoperative Identification of Human Brain Tumors

Authors: Guoqiang Yu, Mehrana Mohtasebi, Jinghong Sun, Thomas Pittman

Abstract:

Malignant glioma (MG) is the most common type of primary malignant brain tumor. Surgical resection of MG remains the cornerstone of therapy, and the extent of resection correlates with patient survival. A limiting factor for resection, however, is the difficulty in differentiating the tumor from normal tissue during surgery. Fluorescence imaging is an emerging technique for real-time intraoperative visualization of MGs and their boundaries. However, most clinical-grade neurosurgical operative microscopes with fluorescence imaging ability are hampered by low adoption rates due to high cost, limited portability, limited operation flexibility, and lack of skilled professionals with technical knowledge. To overcome the limitations, we innovatively integrated miniaturized light sources, flippable filters, and a recording camera to the surgical eye loupes to generate a wearable fluorescence eye loupe (FLoupe) device for intraoperative imaging of fluorescent MGs. Two FLoupe prototypes were constructed for imaging of Fluorescein and 5-aminolevulinic acid (5-ALA), respectively. The wearable FLoupe devices were tested on tumor-simulating phantoms and patients with MGs. Comparable results were observed against the standard neurosurgical operative microscope (PENTERO® 900) with fluorescence kits. The affordable and wearable FLoupe devices enable visualization of both color and fluorescence images with the same quality as the large and expensive stationary operative microscopes. The wearable FLoupe device allows for a greater range of movement, less obstruction, and faster/easier operation. Thus, it reduces surgery time and is more easily adapted to the surgical environment than unwieldy neurosurgical operative microscopes.

Keywords: fluorescence guided surgery, malignant glioma, neurosurgical operative microscope, wearable fluorescence imaging device

Procedia PDF Downloads 48
18635 Microwave Assisted Extraction (MAE) of Castor Oil from Castor Bean

Authors: Ghazi Faisal Najmuldeen, Rosli Mohd Yunus, Nurfarahin Bt Harun, Mardhiana Binti Ismail

Abstract:

Microwave extraction has attracted great interest among researchers. The main virtues of the microwave technique are its cost-effectiveness, time savings, and simple handling procedure. Castor beans were chosen because of their high fatty acid content, especially ricinoleic acid. The purpose of this research is to extract castor oil by microwave-assisted extraction (MAE) using ethanol as the solvent, to investigate the influence of extraction time on the castor oil yield, and to characterize the main composition of the produced castor oil by GC-MS. It was found that there is a direct dependence between the oil yield and the extraction time: the yield increases from 45% to 58% as the time increases from 10 min to 60 min. The major components of the castor oil detected by GC-MS were ricinoleic acid, linoleic acid, and oleic acid.

Keywords: microwave assisted extraction (MAE), castor oil, ricinoleic acid, linoleic acid

Procedia PDF Downloads 489
18634 Advantages and Disadvantages of Distance Learning in Comparison with Full-time Teaching from the Perspective of Chinese University Students

Authors: Daniel Ecler

Abstract:

The aim of this paper was to find out how Chinese university students perceive distance learning compared to full-time teaching, to reveal its advantages and disadvantages, and to try to find what elements could be implemented in regular full-time teaching in order to make it more effective. Recent events have shown that online teaching has a significant role to play in the field of education and needs to be given increased attention and scrutiny. For this purpose, a research survey was conducted using semi-structured questionnaires, which aimed to determine the attitudes of Chinese university students to the phenomenon of distance learning. The results of this survey revealed that most students prefer distance learning to full-time teaching, mainly because it gives them more freedom to participate in teaching, regardless of the environment in which they are currently located. In conclusion, it is necessary to mention that the possibility to participate virtually in teaching from anywhere is a huge advantage that could become part of regular teaching in the future. However, further research into this issue will be necessary.

Keywords: distance learning, full-time teaching, Chinese college students, cultural background

Procedia PDF Downloads 164
18633 'Performance-Based' Seismic Methodology and Its Application in Seismic Design of Reinforced Concrete Structures

Authors: Jelena R. Pejović, Nina N. Serdar

Abstract:

This paper presents an analysis of the “Performance-Based” seismic design method, intended to overcome the perceived disadvantages and limitations of the existing force-based seismic design approach in engineering practice. Bearing in mind the specificity of the earthquake as a load and the fact that the seismic resistance of a structure depends solely on its behaviour in the nonlinear range, the traditional seismic design approach based on force and linear analysis is not adequate. The “Performance-Based” seismic design method is based on nonlinear analysis and can be used in everyday engineering practice. This paper presents the application of the method to an eight-story reinforced concrete building with a combined structural system (a reinforced concrete frame system in one direction and a reinforced concrete ductile wall system in the other). The nonlinear time-history analysis is performed on a spatial model of the structure using the program Perform 3D, with the structure exposed to forty real earthquake records. For the considered building, a large number of results were obtained, and it was concluded that using this method we can, with a high degree of reliability, evaluate structural behavior under earthquakes. Significant differences were obtained in the response of the structure to the various earthquake records. The analysis also showed that the frame structural system did not perform well under earthquake records on soils such as sand and gravel, while the ductile wall system behaved satisfactorily on different types of soil.

Keywords: ductile wall, frame system, nonlinear time-history analysis, performance-based methodology, RC building

Procedia PDF Downloads 356
18632 Genotyping of G/P Non-Typable Group A Rotavirus Strains Revealed G2 and G9 Genotype Circulation in Moroccan Children Fully Vaccinated with Rotarix™

Authors: H. Boulahyaoui, S. Alaoui Amine, C. Loutfi, H. El Annaz, N. Touil, El M. El Fahim, S. Mrani

Abstract:

Three Moroccan children fully vaccinated with Rotarix™ were hospitalized for rotavirus gastroenteritis (RVGE) in the pediatric division of the Farabi Hospital, Oujda. The rotavirus G/P genotypes could not be typed because of the delayed crossing threshold (Ct) values obtained with a group A rotavirus (RVA) real-time RT-PCR. These strains were adapted to cell culture. All viruses replicated and caused extensive cytopathic effects after four or five passages in MA104 cell lines. Significant improvements were obtained in the amount of viral particles, and each virus multiplied to a high titer (7.5 TCID50/ml). VP7 and VP4 partial gene sequencing revealed genotypes distinct from the Rotarix® vaccine strain: two strains were of genotype G2P[4], whereas the third was G9P[8]. Virus isolation, while labor intensive, is recommended as a second test, especially when higher sensitivity than that of conventional RVA genotyping RT-PCR is needed. VP7 antigenic similarities between these strains and Rotarix were determined.

Keywords: vaccine escape, Morocco, Rotarix, G2P[4], G9P[8]

Procedia PDF Downloads 318
18631 D6tions: A Serious Game to Learn Software Engineering Process and Design

Authors: Hector G. Perez-Gonzalez, Miriam Vazquez-Escalante, Sandra E. Nava-Muñoz, Francisco E. Martinez-Perez, Alberto S. Nunez-Varela

Abstract:

The software engineering teaching process has been the subject of many studies. To improve this process, researchers have proposed merely illustrative techniques in the classroom, such as topic presentations and dynamics among students, on the one hand, and attempts to involve students in real projects with companies and institutions, to expose them to real software development problems, on the other. Simulators and serious games have been used as auxiliary tools to introduce students to topics that are too abstract when presented in the traditional way. Most of these tools cover a limited area of the huge software engineering scope. To address this problem, we have developed D6tions, an educational serious game that simulates the software engineering process and is designed to let students experience the different stages a software engineer (playing the role of project leader, developer, or designer) goes through while participating in a software project. We describe previous approaches to this problem, how D6tions was designed, its rules and directions, and the results we obtained when undergraduate students played the game.

Keywords: serious games, software engineering, software engineering education, software engineering teaching process

Procedia PDF Downloads 478
18630 Modal Density Influence on Modal Complexity Quantification in Dynamic Systems

Authors: Fabrizio Iezzi, Claudio Valente

Abstract:

The viscous damping in dynamic systems can be proportional or non-proportional. In the first case, the mode shapes are real, whereas in the second case they are complex. From an engineering point of view, the complexity of the mode shapes is important in order to quantify the non-proportional damping. Different indices exist to provide estimates of the modal complexity. These indices are zero or non-zero depending on whether the mode shapes are real or complex. The modal density problem arises in experimental identification when the dynamic system has close modal frequencies. Depending on the extent of this closeness, the mode shapes can hold fictitious imaginary quantities that affect the values of the modal complexity indices. The result is a failure to identify the real or complex mode shapes and, in turn, the proportional or non-proportional damping. The paper aims to show the influence of the modal density on the values of these indices in the case of both proportional and non-proportional damping. Theoretical and pseudo-experimental solutions are compared to analyze the problem on an appropriate mechanical system.
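
As an example of such an index, the sketch below computes the modal phase collinearity (MPC), which is close to 1 for (nearly) real mode shapes and decreases as the shape becomes complex; it illustrates the kind of index discussed above and is not necessarily the one used by the authors.

```python
import numpy as np

def modal_phase_collinearity(phi):
    """phi: complex-valued mode shape vector; returns MPC in [0, 1]."""
    x, y = phi.real, phi.imag
    cov = np.array([[x @ x, x @ y],
                    [x @ y, y @ y]])
    lam = np.linalg.eigvalsh(cov)              # eigenvalues, ascending
    return ((lam[1] - lam[0]) / (lam[1] + lam[0])) ** 2

real_mode = np.array([1.0, 0.8, 0.3, -0.5]) * np.exp(1j * 0.1)   # ~proportional damping
complex_mode = np.array([1.0 + 0.6j, 0.2 - 0.7j, -0.5 + 0.4j, 0.8j])
print(modal_phase_collinearity(real_mode))     # close to 1
print(modal_phase_collinearity(complex_mode))  # noticeably lower
```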

Keywords: complex mode shapes, dynamic systems identification, modal density, non-proportional damping

Procedia PDF Downloads 373
18629 Formal Asymptotic Stability Guarantees, Analysis, and Evaluation of Nonlinear Controlled Unmanned Aerial Vehicle for Trajectory Tracking

Authors: Soheib Fergani

Abstract:

This paper concerns the formal asymptotic stability guarantees, analysis, and evaluation of a nonlinear controlled unmanned aerial vehicle (UAV) for trajectory tracking purposes. As the system is an under-actuated nonlinear system, the control strategy has been oriented towards hierarchical control. The dynamics of the system and the mission purpose make it mandatory to provide an absolute proof of the vehicle's stability during the maneuvers. To this end, this work establishes the complete theoretical proof for an implementable, control-oriented strategy that asymptotically stabilizes the system (GAS and LISS), a proof that has not been provided in previous works. The considered model is reorganized into two partly decoupled sub-systems, and the control strategy is presented in two stages: the first sub-system is controlled by a nonlinear backstepping controller that generates the desired control inputs to stabilize the second sub-system. This methodology is then applied to a hardware-in-the-loop UAV simulator (SiMoDrones), which reproduces the realistic behaviour of the UAV in an indoor environment, to show the efficiency of the proposed strategy.

Keywords: UAV application, trajectory tracking, backstepping, sliding mode control, input to state stability, stability evaluation

Procedia PDF Downloads 45
18628 The Development, Validation, and Evaluation of the Code Blue Simulation Module in Improving the Code Blue Response Time among Nurses

Authors: Siti Rajaah Binti Sayed Sultan

Abstract:

Managing a code blue event is stressful for nurses, the patient, and the patient's family. A rapid response from the first and second responders in the code blue event will improve patient outcomes and prevent the tissue hypoxia that leads to brain injury and other organ failures. Providing 1 minute for cardiac massage and 2 minutes for defibrillation will significantly improve patient outcomes. The American Heart Association has issued guidelines for managing cardiac arrest patients, and the hospital must provide competent staff to manage this situation. This can be achieved when the staff are well equipped with the skill, attitude, and knowledge to manage the situation with well-planned strategies, i.e., clear guidelines for managing the code blue event, competent staff, and functional equipment. Code blue simulation (CBS) was chosen for the code blue management training program because it can mimic real scenarios. Having a code blue simulation module allows the staff to appreciate what they will face during a code blue event, especially since it rarely happens in their area. This CBS module training helps the staff familiarize themselves with the activities that happen during actual events and enables them to operate the equipment accordingly. Being challenged and independent in managing the code blue in its early phase gives the patient a better outcome. The CBS module will also provide the assessor and the hospital management team with the proper tools and guidelines for managing the code blue drill. Prompt action will benefit the patient and their family. It also indirectly increases confidence and job satisfaction among nurses, raises the standard of care, reduces complications and the hospital burden, and enhances cost-effective care.

Keywords: code blue simulation module, development of code blue simulation module, code blue response time, code blue drill, cardiorespiratory arrest, managing code blue

Procedia PDF Downloads 45
18627 A Study on the Waiting Time for the First Employment of Arts Graduates in Sri Lanka

Authors: Imali T. Jayamanne, K. P. Asoka Ramanayake

Abstract:

The transition from tertiary-level education to employment is one of the challenges that many fresh university graduates face after graduation. The transition period, or the waiting time to obtain the first employment, varies with socio-economic factors and the general characteristics of a graduate. Compared to graduates in other fields of study, Arts graduates in Sri Lanka have to wait a long time to find their first employment. The objective of this study is to identify the determinants of the transition from higher education to employment of these graduates using survival models. The study is based on a survey conducted in 2016 on a stratified random sample of Arts graduates from Sri Lankan universities who had graduated in 2012. Among the 469 responses, 36 (8%) waiting times were interval censored and 13 (3%) were right censored. Waiting time for the first employment varied between zero and 51 months. Initially, the log-rank and Gehan-Wilcoxon tests were performed to identify the significant factors. Gender, ethnicity, GCE Advanced Level English grade, civil status, university, class received, degree type, sector of first employment, type of first employment, and the educational qualifications required for the first employment were significant at 10%. The Cox proportional hazards model was fitted to model the waiting time for first employment with these significant factors. All factors except ethnicity and type of employment were significant at 5%. However, since the proportional hazards assumption was violated, a lognormal accelerated failure time (AFT) model was fitted to model the waiting time for the first employment. The same factors were significant in the AFT model as in the Cox proportional hazards model.
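
A minimal sketch of the modeling step described above, using the lifelines library on a hypothetical data frame; the column names and values are assumptions, not the survey's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter, LogNormalAFTFitter

df = pd.DataFrame({
    "waiting_months": [3, 12, 7, 25, 51, 1, 18, 9],
    "employed":       [1, 1, 1, 1, 0, 1, 1, 1],     # 0 = right censored
    "female":         [1, 0, 1, 1, 0, 1, 0, 1],
    "english_grade":  [2, 1, 3, 2, 3, 1, 2, 3],
})

# Cox proportional hazards model (fitted first in the study).
cph = CoxPHFitter()
cph.fit(df, duration_col="waiting_months", event_col="employed")
cph.print_summary()

# Lognormal AFT model, used when the proportional hazards assumption fails.
aft = LogNormalAFTFitter()
aft.fit(df, duration_col="waiting_months", event_col="employed")
aft.print_summary()
```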

Keywords: AFT model, first employment, proportional hazard, survey design, waiting time

Procedia PDF Downloads 298