Search results for: system models
21475 The Continuously Supported Infinity Rail Subjected to a Moving Complex Bogie System
Authors: Vladimir Stojanović, Marko D. Petković
Abstract:
The vibration of a complex bogie system moving along a high-order shear deformable beam on a viscoelastic foundation is studied. The complex bogie system is modeled as elastically connected rigid bars on identical supports. Elastic coupling between the bars is introduced to simulate rigid or flexible (transverse and/or rotational) connections. Each identical support is modeled as a spring and dashpot attached to the bar on one side, interacting with the beam through a concentrated mass on the other side. It is assumed that the masses and the beam are always in contact. A new, analytically determined critical velocity of the system is presented. The case in which the complex bogie system exceeds the minimum phase velocity of waves in the beam, where the vibration of the system may become unstable, is analyzed. The effect of elastic coupling between the bars on the stability of the system is also examined. Instability regions for the complex bogie system are found by applying the principle of the argument and the D-decomposition method.
Keywords: Reddy-Bickford beam, D-decomposition method, principle of argument, critical velocity
Procedia PDF Downloads 307
21474 An Educational Electronic Health Record with a Configurable User Interface
Authors: Floriane Shala, Evangeline Wagner, Yichun Zhao
Abstract:
Background: Proper educational training and support are proven to be major components of EHR (Electronic Health Record) implementation and use. However, the majority of health providers are not sufficiently trained in EHR use, leading to adverse events, errors, and decreased quality of care. In response to this, students studying Health Information Science, Public Health, Nursing, and Medicine should all gain a thorough understanding of EHR use at different levels for different purposes. The design of a usable and safe EHR system that accommodates the needs and workflows of different users, user groups, and disciplines is required for EHR learning to be efficient and effective. Objectives: This project builds several artifacts which seek to address both the educational and usability aspects of an educational EHR. The artifacts proposed are models for and examples of such an EHR with a configurable UI to be learned by students who need a background in EHR use during their degrees. Methods: Review literature and gather professional opinions from domain experts on usability, the use of workflow patterns, UI configurability and design, and the educational aspect of EHR use. Conduct interviews in a semi-casual virtual setting with open discussion in order to gain a deeper understanding of the principal aspects of EHR use in educational settings. Select a specific task and user group to illustrate how the proposed solution will function based on the current research. Develop three artifacts based on the available research, professional opinions, and prior knowledge of the topic. The artifacts capture the user task and user’s interactions with the EHR for learning. The first generic model provides a general understanding of the EHR system process. The second model is a specific example of performing the task of MRI ordering with a configurable UI. The third artifact includes UI mock-ups showcasing the models in a practical and visual way. 
Significance: Due to the lack of educational EHRs, medical professionals do not receive sufficient EHR training. Implementing an educational EHR with a usable and configurable interface to suit the needs of different user groups and disciplines will help facilitate EHR learning and training and ultimately improve the quality of patient care.
Keywords: education, EHR, usability, configurable
Procedia PDF Downloads 158
21473 The Characteristics of Transformation of Institutional Changes and Georgia
Authors: Nazira Kakulia
Abstract:
The analysis of the transformation of institutional changes outlines two important characteristics: the speed of the changes and their sequence. Successful transformation must be carried out in three stages. In the first stage, macroeconomic stabilization must be achieved with the help of fiscal and monetary tools. A two-tier banking system should be established, and the active functions of the central bank should be replaced by passive ones (reserve requirements and the refinancing rate), together with growing involvement of the private sector. Fiscal policy here means the creation of a tax system to replace previously existing direct state revenues; the share of subsidies in state expenses must also be reduced. The second stage begins after macroeconomic stabilization is reached, with the change of formal institutions to stimulate private business. Corporate legislation creates a competitive market environment, and the privatization of state companies takes place. Bankruptcy and contract law are created. The third stage is the most extended one and involves the formation of all state structures necessary for the proper functioning of a market economy. These three stages, concerning the cycle period of political and social transformation and the hierarchy of changes, can also be grouped by a different methodology: in the first and shortest stage, the transfer of power takes place. In the second stage, institutions corresponding to the new goals are created. The last phase of transformation is extended in time and includes infrastructural, socio-cultural and socio-structural changes. The main goal of this research is to explore and identify the features of such models.
Keywords: competitive environment, fiscal policy, macroeconomic stabilization, tax system
Procedia PDF Downloads 267
21472 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models
Authors: Yungtai Lo
Abstract:
Two-part random effects models have been used to fit semi-continuous longitudinal data where the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew normal random effects model were used to examine effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show in all four two-part models that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers drinking from a bottle. We also show that there are no differences in model fit using either the logit link function or the probit link function for modeling the probability of bottle-weaning in all four models. Furthermore, prediction accuracy of the logit or probit link function is not sensitive to the distribution assumption on daily milk intake from bottles in toddlers not off bottles.
Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve
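The core idea of the two-part models above can be sketched without the random-effects machinery: one part models the probability of a positive response, the other models the distribution of the positive values. A minimal illustration (assumed parameters and a simulated dataset, not the trial data) for the logit-log-normal variant, where the marginal mean combines both parts as E[Y] = p * exp(mu + sigma^2/2):

```python
import numpy as np

# Sketch of the two-part decomposition for semi-continuous data: zeros occur
# with probability 1 - p, positive values are log-normal. Parameters below
# are illustrative, not estimates from the bottle-weaning trial.
rng = np.random.default_rng(0)

def simulate_semicontinuous(n, p_positive, mu, sigma):
    """Draw semi-continuous data: zeros with prob 1 - p_positive,
    log-normal values otherwise."""
    positive = rng.random(n) < p_positive
    return np.where(positive, rng.lognormal(mu, sigma, n), 0.0)

def two_part_estimates(y):
    """Estimate each part separately: the probability of a positive
    response, and the log-scale mean/sd of the positive responses."""
    pos = y[y > 0]
    p_hat = pos.size / y.size
    return p_hat, np.log(pos).mean(), np.log(pos).std()

y = simulate_semicontinuous(50_000, p_positive=0.6, mu=1.0, sigma=0.5)
p_hat, mu_hat, sigma_hat = two_part_estimates(y)
# Marginal mean recombines both parts (log-normal mean formula).
marginal_mean = p_hat * np.exp(mu_hat + sigma_hat**2 / 2)
```

In the full models of the paper, both parts additionally carry subject-level random effects and covariates such as treatment arm.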
Procedia PDF Downloads 350
21471 Towards the Modeling of Lost Core Viability in High-Pressure Die Casting: A Fluid-Structure Interaction Model with 2-Phase Flow Fluid Model
Authors: Sebastian Kohlstädt, Michael Vynnycky, Stephan Goeke, Jan Jäckel, Andreas Gebauer-Teichmann
Abstract:
This paper summarizes the progress of the latest computational fluid dynamics research towards modeling lost core viability in high-pressure die casting. High-pressure die casting is a process widely employed in the automotive and neighboring industries due to its advantages in casting quality and cost efficiency. The degrees of freedom are, however, somewhat limited, as it has so far been difficult to use lost cores in the process. This is now changing, and the deployment of lost cores is considered a future growth opportunity for high-pressure die casting companies. The use of this technology is nevertheless difficult: the strength of the core material, chiefly salt, is limited, and experiments have shown that the cores will not hold under all circumstances and process designs. For this purpose, the publicly available CFD library foam-extend (OpenFOAM) is used, and two additional fluid models for incompressible and compressible two-phase flow are implemented as fluid solver models in the FSI library, based on the volume-of-fluid (VOF) methodology. The necessity of the fluid-structure interaction (FSI) approach is shown on a simple CFD model geometry. The model is benchmarked against analytical models and experimental data. Sufficient agreement is found with the analytical models and good agreement with the experimental data. An outlook on future developments concludes the paper.
Keywords: CFD, fluid-structure interaction, high-pressure die casting, multiphase flow
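The VOF methodology mentioned above tracks, in each cell, a volume fraction of one phase that is advected with the flow. A minimal one-dimensional sketch (assumed first-order upwind scheme and periodic boundary, far simpler than the paper's foam-extend solver) shows the two properties a VOF scheme must keep: boundedness of the fraction and conservation of total volume:

```python
import numpy as np

# Toy VOF advection: alpha in [0, 1] is the liquid volume fraction per cell.
def advect_vof(alpha, velocity, dx, dt, steps):
    """First-order upwind advection on a periodic 1-D grid (velocity > 0).
    The CFL number velocity*dt/dx must stay <= 1 for stability."""
    c = velocity * dt / dx
    for _ in range(steps):
        # Convex combination of each cell and its upwind neighbor:
        # preserves bounds and conserves the total sum exactly.
        alpha = alpha - c * (alpha - np.roll(alpha, 1))
    return alpha

alpha0 = np.zeros(100)
alpha0[40:60] = 1.0   # a slug of melt in a gas-filled channel
alpha1 = advect_vof(alpha0, velocity=1.0, dx=0.01, dt=0.005, steps=50)
```

Production VOF solvers add interface-compression or geometric reconstruction to limit the numerical smearing this first-order scheme exhibits.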
Procedia PDF Downloads 333
21470 A Deep Learning Approach to Real Time and Robust Vehicular Traffic Prediction
Authors: Bikis Muhammed, Sehra Sedigh Sarvestani, Ali R. Hurson, Lasanthi Gamage
Abstract:
Vehicular traffic events have highly complex spatial correlations and temporal interdependencies and are also influenced by environmental events such as weather conditions. To capture these spatial and temporal interdependencies and make more realistic vehicular traffic predictions, graph neural network (GNN) based traffic prediction models have been extensively utilized due to their capability of capturing non-Euclidean spatial correlation very effectively. However, most existing GNN-based traffic prediction models have some limitations in learning complex and dynamic spatial and temporal patterns, due to the following missing factors. First, most GNN-based traffic prediction models have used static distance or sometimes haversine distance mechanisms between spatially separated traffic observations to estimate spatial correlation. Secondly, most GNN-based traffic prediction models have not incorporated environmental events that have a major impact on normal traffic states. Finally, most GNN-based models did not use an attention mechanism to focus only on important traffic observations. The objective of this paper is to study and make real-time vehicular traffic predictions while incorporating the effect of weather conditions. To fill the previously mentioned gaps, our prediction model uses the real-time driving distance between sensors to build a distance matrix, or spatial adjacency matrix, and capture spatial correlation. In addition, our prediction model considers the effect of six types of weather conditions and has an attention mechanism in both spatial and temporal data aggregation. Our prediction model efficiently captures the spatial and temporal correlation between traffic events; it relies on a graph attention network (GAT) and bidirectional long short-term memory (Bi-LSTM) plus attention layers and is called GAT-BILSTMA.
Keywords: deep learning, real time prediction, GAT, Bi-LSTM, attention
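The static haversine baseline the abstract argues against can be sketched directly; the proposed model swaps the great-circle distance below for a real-time driving distance, but the adjacency-matrix construction is analogous. Coordinates and the Gaussian length scale here are illustrative assumptions:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def adjacency(coords, length_scale_km=5.0):
    """Gaussian-kernel spatial adjacency: nearby sensors get weights near 1."""
    n = len(coords)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d = haversine_km(*coords[i], *coords[j])
            w[i, j] = np.exp(-(d / length_scale_km) ** 2)
    return w

# Three hypothetical traffic sensors (lat, lon).
sensors = [(40.71, -74.00), (40.73, -73.99), (40.75, -73.98)]
W = adjacency(sensors)
```

This W is what a GAT layer would consume as its (dense) graph structure; driving distance replaces `haversine_km` when road topology matters.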
Procedia PDF Downloads 73
21469 Roadway Maintenance Management System
Authors: Chika Catherine Ayogu
Abstract:
Rehabilitation plays an important and integral part in the life cycle of a roadway. A roadway rehabilitation management system is a systematic method for inspecting and rating roadway condition in a given area. The system performs a cost-effective analysis of various maintenance and rehabilitation strategies. Finally, the system prioritizes and recommends roadway rehabilitation and maintenance to maximize results within a given budget. During execution of a maintenance activity, the system also tracks labour, materials, equipment and cost for the activities performed. The system implements physical field inspection and rating of each street segment, which is then entered into a database. The information is analyzed using software that provides recommendations and projects future conditions. The roadway management system produces a deterioration curve for each segment based on the input, then assigns the most cost-effective maintenance strategy based on condition, surface type, functional classification and available budget. This paper investigates the roadway management system and its capability to assist in applying the right treatment to the right roadway at the right time, so that the expected service life of the roadway is extended as long as possible at acceptable cost.
Keywords: effectiveness, rehabilitation, roadway, software system
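The budget-constrained prioritization step described above can be sketched as a simple greedy selection by benefit-to-cost ratio. This is an assumed illustration, not the deployed software's algorithm; names, costs, and the "expected life extension" benefit measure are hypothetical:

```python
# Greedy roadway treatment prioritization under a fixed budget.
def prioritize(segments, budget):
    """segments: list of (name, treatment_cost, expected_life_extension_yrs).
    Fund the best value-per-dollar segments until the budget runs out."""
    ranked = sorted(segments, key=lambda s: s[2] / s[1], reverse=True)
    funded, remaining = [], budget
    for name, cost, _benefit in ranked:
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded

candidates = [("Main St", 40_000, 8),
              ("Oak Ave", 25_000, 6),
              ("Elm Rd", 60_000, 7)]
plan = prioritize(candidates, budget=70_000)
```

Real pavement management systems typically replace the static benefit number with the area under each segment's deterioration curve with and without treatment.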
Procedia PDF Downloads 152
21468 Hybrid Wavelet-Adaptive Neuro-Fuzzy Inference System Model for a Greenhouse Energy Demand Prediction
Authors: Azzedine Hamza, Chouaib Chakour, Messaoud Ramdani
Abstract:
Energy demand prediction plays a crucial role in achieving next-generation power systems for agricultural greenhouses. As a result, high prediction quality is required for efficient smart grid management and therefore low-cost energy consumption. The aim of this paper is to investigate the effectiveness of a hybrid data-driven model for day-ahead energy demand prediction. The proposed model consists of the Discrete Wavelet Transform (DWT) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). The DWT is employed to decompose the original signal into a set of subseries, and an ANFIS is then used to generate the forecast for each subseries. The proposed hybrid method (DWT-ANFIS) was evaluated using a week of greenhouse energy demand data and compared with a plain ANFIS. The performances of the different models were evaluated by comparing the corresponding values of the Mean Absolute Percentage Error (MAPE). It was demonstrated that the discrete wavelet transform can improve agricultural greenhouse energy demand modeling.
Keywords: wavelet transform, ANFIS, energy consumption prediction, greenhouse
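The decomposition step can be illustrated with a one-level Haar DWT (the paper does not state which mother wavelet it uses, so Haar is an assumption): the demand series splits into a smooth approximation and a detail subseries, each of which would then be forecast by its own ANFIS and the forecasts recombined:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: scaled pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfect reconstruction of the original series."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

demand = np.array([3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.0, 6.0])  # hourly kW (toy)
a, d = haar_dwt(demand)
```

Because the transform is perfectly invertible, summing the per-subseries forecasts reconstructs a forecast of the original demand signal.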
Procedia PDF Downloads 90
21467 An Unified Model for Longshore Sediment Transport Rate Estimation
Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza
Abstract:
Wind-wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is important for solving many environmental and geotechnical issues. There are many types of sediment transport models, but none of them is widely accepted, because the process is not fully understood; another problem is the lack of sufficient measurement data to verify proposed hypotheses. Different types of models exist for longshore sediment transport (LST, discussed in this work) and cross-shore transport, which relate to different time and space scales of the processes; there are models describing bed-load transport (discussed in this work), suspended and total sediment transport. LST models use, among others, information about (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and becomes complicated for heterogeneous sediment. Moreover, the LST rate strongly depends on local environmental conditions. To organize existing knowledge, a series of sediment transport model intercomparisons was carried out as part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone in Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on sediment transport (Du Boys, Meyer-Peter & Müller, Ribberink, Engelund & Hansen). It turned out that the values of estimated longshore instantaneous mass sediment transport are in general agreement with earlier studies and measurements conducted in the area of interest.
However, none of the formulas really stands out from the rest as being particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated from the bed shear stress and the critical bed shear stress. The dependence on environmental conditions is expressed by one coefficient (in the form of a constant or a function), so the model presented can be quite easily adjusted to local conditions. The importance of each model parameter for specific velocity ranges is discussed. Moreover, it is shown that the near-bottom flow velocity is the main determinant of longshore bed-load in storm conditions. Thus, the accuracy of the results depends less on the sediment transport model itself and more on the appropriate modeling of the near-bottom velocities.
Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone
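One of the classical formulas in the comparison set, Meyer-Peter & Müller, has the same excess-shear-stress structure the unified formula builds on. A sketch with the textbook coefficients (8 and a critical Shields stress of 0.047; the study's calibrated values may differ):

```python
import numpy as np

def mpm_bedload(theta, theta_c=0.047):
    """Dimensionless bedload transport rate (Meyer-Peter & Mueller form):
    q* = 8 * (theta - theta_c)^1.5, zero below the threshold of motion.
    theta is the Shields stress (dimensionless bed shear stress)."""
    excess = np.maximum(theta - theta_c, 0.0)
    return 8.0 * excess ** 1.5

theta = np.array([0.02, 0.047, 0.1, 0.3])   # calm -> storm conditions
q_star = mpm_bedload(theta)
```

The unified formula of the study keeps this stress-minus-critical-stress dependence but folds the environmental dependence into a single tunable coefficient.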
Procedia PDF Downloads 387
21466 A New Mathematical Model of Human Olfaction
Authors: H. Namazi, H. T. N. Kuan
Abstract:
It is known that in humans, adaptation to a given odor occurs within a quite short span of time (typically one minute) after the odor is presented to the brain. Different models of human olfaction have been developed, but none of them considers the diffusion phenomenon in olfaction. A novel microscopic model of human olfaction is presented in this paper. We develop this model by incorporating transient diffusivity; the mathematical model is written based on diffusion of the odorant within the mucus layer. The model developed in this paper makes it possible to quantify the objective strength of an odor.
Keywords: diffusion, microscopic model, mucus layer, olfaction
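A baseline for odorant diffusion into the mucus layer, under assumed boundary conditions (constant surface concentration, semi-infinite layer, constant diffusivity, none of which are stated in the abstract), is the classical solution c(x, t) = c0 · erfc(x / (2√(Dt))):

```python
import math

def concentration(x, t, c0=1.0, diffusivity=1e-9):
    """Odorant concentration at depth x (m) and time t (s) for diffusion
    into a semi-infinite mucus layer held at surface concentration c0.
    The diffusivity (m^2/s) is an illustrative value."""
    return c0 * math.erfc(x / (2.0 * math.sqrt(diffusivity * t)))

# Concentration decays with depth and rises with time at a fixed depth.
c_shallow = concentration(1e-6, 1.0)
c_deep = concentration(5e-6, 1.0)
c_later = concentration(5e-6, 10.0)
```

The paper's model replaces the constant D with a transient diffusivity, which this closed form cannot capture but which it serves to benchmark.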
Procedia PDF Downloads 507
21465 Energy Saving Potential of a Desiccant-Based Indirect-Direct Evaporative Cooling System
Authors: Amirreza Heidari, Akram Avami, Ehsan Heidari
Abstract:
Evaporative cooling systems are known as energy-efficient cooling systems, with much lower electricity consumption than conventional vapor compression systems. A serious limitation of these systems, however, is that they are not applicable in humid regions. Combining a desiccant wheel with these systems, known as desiccant-based evaporative cooling, makes it possible to use evaporative cooling in humid climates. This paper evaluates the performance of a cooling system combining a desiccant wheel with direct and indirect evaporative coolers (called a desiccant-based indirect-direct evaporative cooling (DIDE) system) and then evaluates the energy saving potential of this system over a conventional vapor compression cooling and drying system. To illustrate the system's ability to provide comfort conditions, a dynamic hourly simulation of this system is performed for a typical 60 m² building in Sydney, Australia. To evaluate the energy saving potential, a conventional cooling and drying system is also simulated for the same cooling capacity. It has been found that the DIDE system is able to provide comfortable temperature and relative humidity in a subtropical humid climate like Sydney's. The electricity and natural gas consumption of this system are respectively 39.2% and 2.6% lower than those of the conventional system over a week. As the research has demonstrated, the innovative DIDE system is an energy-efficient cooling system for subtropical humid regions.
Keywords: desiccant, evaporative cooling, dehumidification, indirect evaporative cooler
Procedia PDF Downloads 153
21464 Petri Net Modeling and Simulation of a Call-Taxi System
Authors: T. Godwin
Abstract:
A call-taxi system is a type of taxi service where a taxi can be requested through a phone call or mobile app. The schematic functioning of a call-taxi system is modeled using a Petri net, which provides the necessary conditions for a taxi to be assigned by a dispatcher to pick up a customer as well as the conditions for the taxi to be released by the customer. A Petri net is a graphical modeling tool used to understand sequences, concurrences, and confluences of activities in the working of discrete event systems. It uses tokens on a directed bipartite multi-graph to simulate the activities of a system. The Petri net model is translated into a simulation model, and a call-taxi system is simulated. The simulation model helps in evaluating the operation of a call-taxi system based on the fleet size as well as the operating policies for call-taxi assignment and empty call-taxi repositioning. The developed Petri-net-based simulation model can be used to decide the fleet size as well as the call-taxi assignment policies for a call-taxi system.
Keywords: call-taxi, discrete event system, Petri net, simulation modeling
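The token-firing rule described above can be sketched in a few lines. The net structure below (two transitions, three places, and the token counts) is an assumed miniature, not the paper's full model: a transition is enabled when every input place holds enough tokens, and firing it moves tokens from inputs to outputs:

```python
# Minimal Petri net: places hold tokens, transitions fire when enabled.
marking = {"free_taxis": 2, "waiting_customers": 3, "rides_in_progress": 0}

# transition name -> (input places and weights, output places and weights)
transitions = {
    "assign":  ({"free_taxis": 1, "waiting_customers": 1},
                {"rides_in_progress": 1}),
    "release": ({"rides_in_progress": 1},
                {"free_taxis": 1}),
}

def enabled(name):
    """A transition is enabled iff every input place has enough tokens."""
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name):
    """Consume input tokens, produce output tokens."""
    assert enabled(name), f"transition {name} is not enabled"
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] += n

fire("assign")
fire("assign")   # both taxis are now occupied; "assign" is disabled
```

A simulation model is obtained by attaching timing distributions to transitions and repeatedly firing whichever enabled transition completes first.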
Procedia PDF Downloads 426
21463 Bank ATM Monitoring System Using IR Sensor
Authors: P. Saravanakumar, N. Raja, M. Rameshkumar, D. Mohankumar, R. Sateeshkumar, B. Maheshwari
Abstract:
This research work is designed using Microsoft VB.NET as the front end and MySQL as the back end. The project secures user transactions in the ATM system. The application includes an option for sending failed-transaction details to the customer by SMS. When a customer withdraws an amount from a bank ATM, the cash is sometimes not dispensed even though the amount is debited from the account. This application is used to avoid this type of problem. The proposed system uses an IR technique to detect the dispensed cash: an IR transmitter and an IR receiver are placed in the cash dispensing path and connected to each other through the IR signal. When a customer withdraws an amount, the IR receiver monitors whether the cash is dispensed. If the cash is dispensed, the signal between the IR receiver and the IR transmitter is interrupted, and the monitoring system debits the withdrawal amount from the account. If the cash is not dispensed, the signal is not interrupted, and the withdrawal amount is not debited. If the transaction completes successfully, the transaction details, such as the withdrawal amount and current balance, are sent to the customer via SMS. If the transaction fails, a transaction-failed message is sent to the customer.
Keywords: ATM system, monitoring system, IR Transmitter, IR Receiver
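The core decision logic, debit only when the IR beam confirms cash left the machine, can be sketched as follows. This is a simplified illustration of the described behavior (function name, amounts, and SMS wording are hypothetical; the actual system is VB.NET):

```python
# Settle one withdrawal attempt based on the IR beam state.
def settle_withdrawal(balance, amount, beam_interrupted):
    """Return (new_balance, sms_text). The beam is interrupted only when
    cash physically passes between the IR transmitter and receiver."""
    if beam_interrupted:                 # cash actually dispensed -> debit
        new_balance = balance - amount
        sms = f"Withdrawn {amount}; balance {new_balance}"
    else:                                # cash never left the machine
        new_balance = balance
        sms = "Transaction failed; account not debited"
    return new_balance, sms

ok_balance, ok_sms = settle_withdrawal(1000, 200, beam_interrupted=True)
fail_balance, fail_sms = settle_withdrawal(1000, 200, beam_interrupted=False)
```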
Procedia PDF Downloads 310
21462 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images
Authors: Shenlun Chen, Leonard Wee
Abstract:
Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland forming; >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists’ grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotating. Tumor regions were annotated by experienced pathologists into high-, medium-, low-differentiation and normal tissue, which correspond to tumor with clear, unclear, or no gland structure and non-tumor, respectively. Then a differentiation prediction model was developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models’ prediction of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas COAD (TCGA-COAD) project was enrolled in this study.
For the gland segmentation model, the receiver operating characteristic (ROC) reached 0.981 and accuracy reached 0.932 on the validation set. For the differentiation prediction model, ROC reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the groups of low-, medium-, high-differentiation and normal tissue on the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the “Maxstat” method, which was almost the same as the WHO system’s cut-off point of 50%. Both the WHO system’s cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both p-values of the log-rank test were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated through the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, with an optimized cut-off point almost the same as that of the WHO tumor grading system. A tool for automatically calculating differentiation grade may show potential in therapy decision making and personalized treatment.
Keywords: colorectal cancer, differentiation, survival analysis, tumor grading
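The WHO cut-offs quoted in the abstract map a continuous gland-forming percentage onto four discrete grades, which is the quantity the continuous score generalizes. A direct sketch (boundary handling at exactly 50% and 5% is an assumption; the abstract gives half-open ranges):

```python
# WHO differentiation grade from % of tumor that is gland-forming:
# >95% well, 50-95% moderate, 5-50% poor, <5% undifferentiated.
def who_grade(gland_forming_pct):
    if gland_forming_pct > 95:
        return "well-differentiated"
    if gland_forming_pct >= 50:
        return "moderately-differentiated"
    if gland_forming_pct >= 5:
        return "poorly-differentiated"
    return "undifferentiated"

grades = [who_grade(p) for p in (98, 70, 20, 2)]
```

The study's continuous grade effectively keeps the percentage itself (normalized to 0-1) rather than collapsing it to these four bins, which is why a data-driven cut-off near 51% could be recovered.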
Procedia PDF Downloads 134
21461 Conventional and Hybrid Network Energy Systems Optimization for Canadian Community
Authors: Mohamed Ghorab
Abstract:
Locally generated and distributed systems for thermal and electrical energy are envisioned in the near future to reduce the transmission losses of the centralized system. Distributed Energy Resources (DER) are designed at different sizes (small and medium) and are incorporated in energy distribution between the hubs. The energy generated by each technology at each hub should meet the local energy demands. Economic and environmental enhancement can be achieved when there is interaction and energy exchange between the hubs. Network energy system and CO2 optimization between six different hubs representing a Canadian community are investigated in this study. Three different technology scenarios are studied to meet both thermal and electrical demand loads for the six hubs. The conventional system is used as the first scenario and the reference case: it includes a boiler to provide the thermal energy, while the electrical energy is imported from the utility grid. The second scenario includes a combined heat and power (CHP) system to meet the thermal demand loads and part of the electrical demand load. The third scenario integrates CHP with an Organic Rankine Cycle (ORC), where the thermal waste energy from the CHP system is used by the ORC to generate electricity. The General Algebraic Modeling System (GAMS) is used to model the DER system optimization based on energy economics and CO2 emission analyses. The results are compared with the conventional energy system and show that scenarios 2 and 3 provide annual total cost savings of 21.3% and 32.3%, respectively, compared to the conventional system (scenario 1). Additionally, scenario 3 (CHP & ORC systems) provides a 32.5% saving in CO2 emissions compared to the conventional system, versus 9.3% for scenario 2 (CHP system).
Keywords: distributed energy resources, network energy system, optimization, microgeneration system
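The structure of the three scenarios can be illustrated with a toy cost model for a single hub. All prices and efficiencies below are assumed round numbers, not the study's GAMS inputs; the point is only how CHP and then an ORC bottoming cycle displace grid purchases:

```python
# Toy single-hub annual cost comparison of the three scenarios.
GAS_PRICE = 0.04    # $/kWh of fuel (assumed)
GRID_PRICE = 0.12   # $/kWh of grid electricity (assumed)

def scenario_costs(heat_kwh, elec_kwh):
    # Scenario 1: boiler (90% efficient) + all electricity from the grid.
    s1 = heat_kwh / 0.9 * GAS_PRICE + elec_kwh * GRID_PRICE
    # Scenario 2: CHP sized on heat (50% thermal, 35% electrical efficiency);
    # any electricity shortfall is imported from the grid.
    fuel = heat_kwh / 0.5
    s2 = fuel * GAS_PRICE + max(elec_kwh - fuel * 0.35, 0) * GRID_PRICE
    # Scenario 3: an ORC recovers CHP waste heat, adding ~5 percentage
    # points of electrical output from the same fuel.
    s3 = fuel * GAS_PRICE + max(elec_kwh - fuel * 0.40, 0) * GRID_PRICE
    return s1, s2, s3

s1, s2, s3 = scenario_costs(heat_kwh=100_000, elec_kwh=80_000)
```

Even this toy model reproduces the qualitative ordering reported in the abstract: CHP beats the conventional system, and CHP+ORC beats both.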
Procedia PDF Downloads 192
21460 Analytical and Numerical Results for Free Vibration of Laminated Composites Plates
Authors: Mohamed Amine Ben Henni, Taher Hassaine Daouadji, Boussad Abbes, Yu Ming Li, Fazilay Abbes
Abstract:
The reinforcement and repair of concrete structures by bonding composite materials have become relatively common operations. Different types of composite materials can be used: carbon fiber reinforced polymer (CFRP), glass fiber reinforced polymer (GFRP) as well as functionally graded material (FGM). The development of analytical and numerical models describing the mechanical behavior of civil engineering structures reinforced by composite materials is therefore necessary; these models will enable engineers to select, design, and size adequate reinforcements for the various types of damaged structures. This study focuses on the free vibration behavior of orthotropic laminated composite plates using a refined shear deformation theory. In these models, the distribution of transverse shear stresses is taken as parabolic, satisfying the zero-shear-stress condition on the top and bottom surfaces of the plates without using shear correction factors. In this analysis, the equation of motion for simply supported thick laminated rectangular plates is obtained by using Hamilton’s principle. The accuracy of the developed model is demonstrated by comparing our results with solutions derived from other higher-order models and with data found in the literature. In addition, a finite-element analysis is used to calculate the natural frequencies of laminated composite plates, and the results are compared with those obtained by the analytical approach.
Keywords: composite materials, laminated composite plate, finite-element analysis, free vibration
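For context, the classical (Kirchhoff) baseline that refined shear-deformation theories improve upon has a closed-form frequency for a simply supported, specially orthotropic plate. A sketch with assumed stiffness values (D11, D12, D22, D66 are bending stiffnesses, rho_h the mass per unit area; none are from the paper):

```python
import math

def omega(m, n, a, b, D11, D12, D22, D66, rho_h):
    """Classical-plate-theory angular natural frequency (rad/s) of mode
    (m, n) for a simply supported specially orthotropic a x b plate."""
    k = (D11 * (m / a) ** 4
         + 2 * (D12 + 2 * D66) * (m / a) ** 2 * (n / b) ** 2
         + D22 * (n / b) ** 4)
    return math.pi ** 2 * math.sqrt(k / rho_h)

# Illustrative values for a 1 m x 1 m fiber-reinforced plate (assumed),
# with the fibers (stiff direction D11) along x.
args = dict(a=1.0, b=1.0, D11=300.0, D12=20.0, D22=30.0, D66=25.0, rho_h=8.0)
w11 = omega(1, 1, **args)
w12 = omega(1, 2, **args)
w21 = omega(2, 1, **args)
```

Refined theories such as the one in the paper lower these frequencies for thick plates, where transverse shear flexibility matters; the classical result is the thin-plate limit both should recover.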
Procedia PDF Downloads 291
21459 Image Captioning with Vision-Language Models
Authors: Promise Ekpo Osaine, Daniel Melesse
Abstract:
Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community, as it connects vision and language understanding, especially in settings where a model must understand the content shown in an image and generate semantically and grammatically correct descriptions. In this project, we followed a standard approach to deep learning-based image captioning, adopting an inject architecture for the encoder-decoder setup, where the encoder extracts image features and the decoder generates a sequence of words that represents the image content. We investigated the image encoders ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. As the caption generation structure, we explored long short-term memory (LSTM). The CLIP-LSTM model demonstrated superior performance compared to the other encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Additionally, among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance, with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586 while using a single-layer LSTM.
Keywords: multi-modal AI systems, image captioning, encoder, decoder, BLEU score
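The BLEU-1 metric reported above is clipped unigram precision with a brevity penalty; BLEU-4 additionally averages clipped 2- to 4-gram precisions in log space. A minimal BLEU-1 sketch (single reference; standard toolkits also smooth and support multiple references):

```python
from collections import Counter
import math

def bleu1(candidate, reference):
    """BLEU-1 for one candidate/reference pair: clipped unigram precision
    times the brevity penalty exp(1 - ref_len/cand_len) for short outputs."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Each candidate word is credited at most as often as it appears
    # in the reference ("clipping").
    clipped = sum(min(c, ref_counts[w]) for w, c in cand_counts.items())
    precision = clipped / len(cand)
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

perfect = bleu1("a dog runs on grass", "a dog runs on grass")
partial = bleu1("a dog runs", "a dog runs on grass")
```

The brevity penalty is what stops a one-word caption with perfect precision from scoring well, which matters for decoder comparisons like those in the abstract.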
Procedia PDF Downloads 78
21458 Reliability Analysis: A Case Study in Designing Power Distribution System of Tehran Oil Refinery
Authors: A. B. Arani, R. Shojaee
Abstract:
An electrical power distribution system is one of the vital infrastructures of an oil refinery, requiring a wide scope of study and planning before construction. In this paper, the power distribution reliability of the Tehran Refinery’s KHDS/GHDS unit is considered to investigate the importance of such studies and to evaluate the designed system. The authors chose and evaluated different configurations of electrical power distribution, along with the existing configuration, with the aim of finding the configuration that best satisfies the conditions of minimum cost of electrical system construction, minimum cost imposed by loss of load, and maximum power system reliability.
Keywords: power distribution system, oil refinery, reliability, investment cost, interruption cost
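The reliability comparison between configurations rests on standard series/parallel reliability algebra, which can be sketched directly (component reliabilities below are assumed, not refinery data):

```python
# Standard reliability block algebra for comparing feeder configurations.
def series(*r):
    """All components must work: R = product of component reliabilities."""
    out = 1.0
    for x in r:
        out *= x
    return out

def parallel(*r):
    """Any one component suffices: R = 1 - product of failure probs."""
    out = 1.0
    for x in r:
        out *= (1.0 - x)
    return 1.0 - out

# Single feeder path: transformer in series with its feeder (assumed values).
single_feeder = series(0.98, 0.95)
# Redundant configuration: two identical feeder paths in parallel.
redundant = parallel(series(0.98, 0.95), series(0.98, 0.95))
```

The configuration study then trades the reliability gain of redundancy against its extra investment cost and the avoided loss-of-load cost.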
Procedia PDF Downloads 876
21457 Optimized Text Summarization Model on Mobile Screens for Sight-Interpreters: An Empirical Study
Authors: Jianhua Wang
Abstract:
To obtain key information quickly from long texts on the small screens of mobile devices, sight-interpreters need an optimized summarization model for fast information retrieval. Four summarization models based on previous studies were examined: title+key words (TKW), title+topic sentences (TTS), key words+topic sentences (KWTS), and title+key words+topic sentences (TKWTS). Psychological experiments were conducted on the four models across three genres of interpreting texts to establish the optimized summarization model for sight-interpreters. This empirical study shows that the optimized summarization model for quickly grasping the key information of a text is title+key words (TKW) for cultural texts, title+key words+topic sentences (TKWTS) for economic texts, and key words+topic sentences (KWTS) for political texts. Keywords: different genres, mobile screens, optimized summarization models, sight-interpreters
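The four summary variants above can be sketched as simple compositions of a text's title, extracted key words, and extracted topic sentences. The extraction itself (key word and topic-sentence selection) is outside this sketch, and the sample inputs are invented:

```python
def summarize(title, key_words, topic_sentences, model):
    # compose one of the four summary variants from pre-extracted parts
    parts = {
        "TKW":   [title, ", ".join(key_words)],
        "TTS":   [title, " ".join(topic_sentences)],
        "KWTS":  [", ".join(key_words), " ".join(topic_sentences)],
        "TKWTS": [title, ", ".join(key_words), " ".join(topic_sentences)],
    }
    return " | ".join(parts[model])

title = "Central Bank Raises Rates"
kw = ["inflation", "interest rates"]
ts = ["The central bank raised rates by 0.5%."]
for model in ("TKW", "TTS", "KWTS", "TKWTS"):
    print(model, "->", summarize(title, kw, ts, model))
```

The study's finding is then a mapping from text genre to the variant that performed best in the experiments (cultural → TKW, economic → TKWTS, political → KWTS).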
Procedia PDF Downloads 316
21456 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant share of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements often fail to consider the full array of feasible systems or product designs for a variety of reasons, including but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate, and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks, and support activities, heightening the risk of suboptimal system performance, premature obsolescence, or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion, and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach to sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs.
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity, and system behavior, demonstrating significant utility within the realm of formal systems decision-making. Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
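A toy sketch of the non-deterministic idea above: enumerate candidate sensor system designs, sample each design's performance from an assumed probability distribution (Monte Carlo), and rank designs by expected multi-attribute utility. The designs, attribute weights, and distributions are invented for illustration and are not from the study:

```python
import random

random.seed(0)

designs = {
    # name: (cost in $k, mean detection accuracy, accuracy std dev) — assumed
    "low-cost IR sensor": (10, 0.80, 0.08),
    "radar + IR fusion":  (45, 0.93, 0.03),
    "lidar + radar + IR": (90, 0.96, 0.02),
}

W_ACCURACY, W_COST = 0.8, 0.2   # assumed stakeholder attribute weights
MAX_COST = 100.0

def utility(cost, accuracy):
    # simple additive multi-attribute utility: reward accuracy, penalize cost
    return W_ACCURACY * accuracy + W_COST * (1 - cost / MAX_COST)

def expected_utility(cost, mu, sigma, n=5000):
    # Monte Carlo estimate: sample accuracy, clip to [0, 1], average utility
    total = 0.0
    for _ in range(n):
        acc = min(1.0, max(0.0, random.gauss(mu, sigma)))
        total += utility(cost, acc)
    return total / n

scores = {name: expected_utility(*spec) for name, spec in designs.items()}
best = max(scores, key=scores.get)
print({k: round(v, 3) for k, v in scores.items()})
print("preferred design:", best)
```

The deterministic version of MATE would use only the mean accuracies; the sampling step is what lets performance uncertainty propagate into the utility ranking.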
Procedia PDF Downloads 189
21455 Net Fee and Commission Income Determinants of European Cooperative Banks
Authors: Karolína Vozková, Matěj Kuc
Abstract:
Net fee and commission income is one of the key elements of a bank’s core income. In the current low-interest-rate environment, this type of income is gaining importance relative to net interest income. This paper analyses the effects of bank-specific and country-specific determinants of net fee and commission income on a set of cooperative banks from European countries over the 2007-2014 period. To do so, dynamic panel data methods (system Generalized Method of Moments) were employed. Subsequently, alternative panel data methods were run as robustness checks of the analysis. A strong positive impact of bank concentration on the share of net fee and commission income was found, indicating that cooperative banks tend to display a higher share of fee income in less competitive markets. This is probably connected with the fact that they stick with their traditional deposit-taking and loan-providing model, and fees on these services are driven down by competitors. Moreover, compared to commercial banks, cooperatives do not expand heavily into non-traditional fee-bearing services under competition, and their overall fee income share therefore decreases with the increased competitiveness of the sector. Keywords: cooperative banking, dynamic panel data models, net fee and commission income, system GMM
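System GMM, the method used above, requires specialised tooling; as a simplified stand-in, this sketch shows the static fixed-effects ("within") estimator on a synthetic bank panel: demean each bank's fee-income share and regressor by bank, then run OLS on the demeaned data. The data-generating process is invented for illustration, and the within estimator is not a substitute for system GMM in genuinely dynamic specifications:

```python
import random

random.seed(1)

N_BANKS, N_YEARS, TRUE_BETA = 50, 8, 0.6

panel = []  # rows: (bank id, concentration x, fee-income share y)
for bank in range(N_BANKS):
    bank_effect = random.gauss(0, 2)               # unobserved heterogeneity
    for _ in range(N_YEARS):
        x = random.uniform(0, 1) + 0.5 * bank_effect  # x correlated with effect
        y = TRUE_BETA * x + bank_effect + random.gauss(0, 0.1)
        panel.append((bank, x, y))

def within_estimator(panel):
    # demean x and y within each bank, then compute the OLS slope
    by_bank = {}
    for bank, x, y in panel:
        by_bank.setdefault(bank, []).append((x, y))
    sxy = sxx = 0.0
    for obs in by_bank.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            sxy += (x - mx) * (y - my)
            sxx += (x - mx) ** 2
    return sxy / sxx

beta_hat = within_estimator(panel)
print(f"true beta: {TRUE_BETA}, within estimate: {beta_hat:.3f}")
```

Pooled OLS on this data would be badly biased because x is correlated with the bank effect; demeaning removes that effect, which is the same identification concern that motivates panel methods in the paper.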
Procedia PDF Downloads 332
21454 Model Observability – A Monitoring Solution for Machine Learning Models
Authors: Amreth Chandrasehar
Abstract:
Machine Learning (ML) models are developed and run in production to solve various use cases that help organizations be more efficient and drive the business. But model development comes at a massive cost, and failed projects mean lost business opportunities. According to a Gartner report, 85% of data science projects fail, and one of the contributing factors is a lack of attention to model observability. Model observability helps developers and operators pinpoint model performance issues, such as data drift, and identify their root causes. This paper focuses on providing insights into incorporating model observability into model development and operationalizing it in production. Keywords: model observability, monitoring, drift detection, ML observability platform
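One common drift-detection technique an observability platform might apply is the Population Stability Index (PSI) between a feature's training-time distribution and the distribution observed in production; PSI above roughly 0.2 is a widely used rule of thumb for significant drift. A minimal sketch with invented histogram counts (this is one illustrative technique, not the specific method of the paper):

```python
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    # PSI = sum over buckets of (actual% - expected%) * ln(actual% / expected%)
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, eps)   # floor to avoid log(0)
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

training_buckets   = [120, 300, 350, 180, 50]   # feature histogram at training
production_buckets = [125, 290, 345, 185, 55]   # similar distribution
drifted_buckets    = [10, 80, 200, 400, 310]    # shifted distribution

print("stable PSI: ", round(psi(training_buckets, production_buckets), 4))
print("drifted PSI:", round(psi(training_buckets, drifted_buckets), 4))
```

An observability pipeline would compute this per feature on a schedule and alert when the score crosses the chosen threshold, which is the "pinpoint data drift" capability described above.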
Procedia PDF Downloads 112
21453 PSS and SVC Controller Design by BFA to Enhance the Power System Stability
Authors: Saeid Jalilzadeh
Abstract:
This paper proposes the design of power system stabilizer (PSS) and static VAR compensator (SVC) controllers based on the Bacterial Foraging Algorithm (BFA) to improve power system stability. The same controller structure is used for both the PSS and the SVC, and a single machine infinite bus (SMIB) system with the SVC located at the generator terminal is used to evaluate the proposed controllers. BFA is used to optimize the controller coefficients. Finally, the dynamic behavior of the generator is investigated by simulating a disturbance in the generator input power with the proposed controllers in place. The simulation results demonstrate that the system with the optimized controllers achieves outstanding, fast damping of power system oscillations. Keywords: PSS, SVC, SMIB, optimize controller
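The chemotaxis step at the core of BFA can be sketched as follows: each "bacterium" (a candidate vector of controller gains) tumbles in a random direction and keeps swimming while the cost improves. The reproduction and elimination-dispersal steps of full BFA are omitted, and the quadratic cost function is a stand-in for the oscillation-damping objective used in the paper:

```python
import random, math

random.seed(42)

def cost(x):
    # stand-in objective with minimum at gains (1.0, 0.5)
    return (x[0] - 1.0) ** 2 + (x[1] - 0.5) ** 2

def chemotaxis(n_bacteria=20, n_steps=100, step_size=0.1, n_swim=4, dim=2):
    bacteria = [[random.uniform(-5, 5) for _ in range(dim)]
                for _ in range(n_bacteria)]
    for _ in range(n_steps):
        for b in bacteria:
            # tumble: pick a random unit direction
            d = [random.gauss(0, 1) for _ in range(dim)]
            norm = math.sqrt(sum(v * v for v in d))
            d = [v / norm for v in d]
            # swim: keep stepping along that direction while cost improves
            for _ in range(n_swim):
                trial = [bi + step_size * di for bi, di in zip(b, d)]
                if cost(trial) < cost(b):
                    b[:] = trial
                else:
                    break
    return min(bacteria, key=cost)

best = chemotaxis()
print("best gains:", [round(v, 3) for v in best], "cost:", round(cost(best), 5))
```

In the actual controller-tuning problem, `cost` would run a time-domain SMIB simulation of the disturbance and return an oscillation-damping index rather than a closed-form quadratic.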
Procedia PDF Downloads 457
21452 An Application of Sinc Function to Approximate Quadrature Integrals in Generalized Linear Mixed Models
Authors: Altaf H. Khan, Frank Stenger, Mohammed A. Hussein, Reaz A. Chaudhuri, Sameera Asif
Abstract:
This paper discusses a novel approach to approximating the quadrature integrals that arise in likelihood-based estimation of parameters for generalized linear mixed models (GLMMs). Bayesian methodology likewise requires the computation of multidimensional integrals with respect to posterior distributions, computations that are not only tedious and cumbersome but in some situations impossible to solve because of singularities, irregular domains, etc. This work applies Sinc function based quadrature rules to approximate such intractable integrals. Sinc-based methods have several advantages: the order of convergence is exponential, they work very well in the neighborhood of singularities, they are in general quite stable, and they provide highly accurate, double-precision estimates. To our knowledge, the Sinc function based approach is applied here for the first time in the statistical domain; its viability and future scope are discussed for parameter estimation in GLMMs as well as in other statistical areas. Keywords: generalized linear mixed model, likelihood parameters, quadrature, Sinc function
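The exponential convergence claimed above can be seen in the simplest Sinc quadrature rule: for analytic integrands that decay fast enough on the real line, the truncated trapezoidal sum h * Σ f(kh) converges exponentially as h shrinks. The Gaussian integrand below is a toy stand-in for the random-effects integrals that appear in GLMM likelihoods:

```python
import math

def sinc_quadrature(f, h=0.5, n=30):
    # h * sum_{k=-n}^{n} f(k h): trapezoidal rule on the real line,
    # which for suitable analytic f converges exponentially in 1/h
    return h * sum(f(k * h) for k in range(-n, n + 1))

approx = sinc_quadrature(lambda x: math.exp(-x * x))
exact = math.sqrt(math.pi)  # integral of exp(-x^2) over the real line
print(f"approx = {approx:.12f}, exact = {exact:.12f}, "
      f"error = {abs(approx - exact):.2e}")
```

Even with the coarse step h = 0.5, the discretization error for this integrand is far below double precision, which is the behavior the paper exploits; handling singularities and finite or semi-infinite domains additionally requires the conformal variable transformations of the full Sinc framework, which this sketch omits.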
Procedia PDF Downloads 396
21451 Influence of Existing Foundations on Soil-Structure Interaction of New Foundations in a Reconstruction Project
Authors: Kanagarajah Ravishankar
Abstract:
This paper describes a study performed for a project featuring an elevated steel bridge structure supported by various types of foundation systems. The project focused on the rehabilitation or redesign of a portion of the bridge substructures founded on caisson foundations. The study this paper focuses on is the evaluation of foundation and soil stiffnesses and the interaction between the existing caissons and the proposed foundations. The caisson foundations bear on top of rock, where the depth to the top of rock varies from approximately 50 to 140 feet below the ground surface. Based on a comprehensive investigation of the existing piers and caissons, the presence of alkali-silica reaction (ASR) was suspected from whitish deposits observed on cracked surfaces as well as internal damage sustained through the entire depth of the foundation structures. Because of these defects, reuse of the existing piers and caissons was deemed unsuitable under the earthquake condition and was precluded. The proposed design of the new foundations and substructures that was ultimately selected neglected any contribution from the existing caisson and pier columns. Due to the complicated configuration of the existing caissons relative to the proposed foundation system, the three-dimensional finite element method (FEM) was employed to evaluate soil-structure interaction (SSI), to evaluate the effect of the existing caissons on the proposed foundations, and to compare the results with a conventional group analysis. The FEM models include separate models for the existing caissons and the proposed foundations, as well as a combined model. Keywords: soil-structure interaction, foundation stiffness, finite element, seismic design
Procedia PDF Downloads 140
21450 Design, Synthesis and Anti-Inflammatory Activity of Some Coumarin and Flavone Derivatives Containing 1,4 Dioxane Ring System
Authors: Asif Husain, Shah Alam Khan
Abstract:
Coumarins and flavones are oxygen-containing heterocycles found in many biologically active compounds. Both ring systems are associated with diverse biological actions and are therefore considered important scaffolds for the design of molecules of pharmaceutical interest. Aim: To synthesize and evaluate the in vivo anti-inflammatory activity of a few coumarin and flavone derivatives containing a 1,4-dioxane ring system. Materials and methods: Coumarin derivatives (3a-d) were synthesized by reacting 7,8-dihydroxycoumarin (2a) and its 4-methyl derivative (2b) with epichlorohydrin/ethylene dibromide. The flavone derivatives (10a-d) were prepared using quercetin and 3,4-dihydroxyflavone. Compounds of both series were evaluated for their anti-inflammatory activity, analgesic activity, and ulcerogenicity in animal models by reported methods. Results and Discussion: The structures of all newly synthesized compounds were confirmed with the help of IR, 1H NMR, 13C NMR, and mass spectral studies. Elemental analysis data for each element analyzed (C, H, N) were within the acceptable range of ±0.4%. Flavone derivatives, in particular the quercetin derivative containing the 1,4-dioxane ring system (10d), showed better anti-inflammatory and analgesic activity along with reduced gastrointestinal toxicity compared with the other synthesized compounds. The anti-inflammatory and analgesic activities of both series are comparable with those of the positive control, diclofenac. Conclusion: Compound 10d, a quercetin derivative, emerged as a lead molecule exhibiting potent anti-inflammatory and analgesic activity with significantly reduced gastric toxicity. Keywords: analgesic, anti-inflammatory, 1,4-dioxane, coumarin, flavone
Procedia PDF Downloads 329
21449 Time and Cost Efficiency Analysis of Quick Die Change System on Metal Stamping Industry
Authors: Rudi Kurniawan Arief
Abstract:
Manufacturing cost and setup time are priority targets for improvement in the metal stamping industry: material and component prices keep rising, while customers require component price reductions year after year. The Single Minute Exchange of Die (SMED) is one of many methods for reducing waste in the stamping industry. The Japanese Quick Die Change (QDC) die system is one SMED system that can reduce both setup time and manufacturing cost. However, this system is rarely used in stamping industries. This paper analyzes how much the QDC die system can reduce setup time and manufacturing cost. The research was conducted by direct observation, simulation, and comparison of the QDC die system with a conventional die system. We found that the QDC die system can save up to 35% of manufacturing cost and reduce setup time by 70%. The simulation showed that the QDC die system is effective for cost reduction but must be applied across several parallel production processes. Keywords: press die, metal stamping, QDC system, single minute exchange die, manufacturing cost saving, SMED
Procedia PDF Downloads 171
21448 Energy Consumption Estimation for Hybrid Marine Power Systems: Comparing Modeling Methodologies
Authors: Kamyar Maleki Bagherabadi, Torstein Aarseth Bø, Truls Flatberg, Olve Mo
Abstract:
Hydrogen fuel cells and batteries are among the promising solutions aligned with carbon emission reduction goals for the marine sector. However, the higher installation and operation costs of hydrogen-based systems compared to conventional diesel gensets raise questions about the appropriate hydrogen tank size and about energy and fuel consumption estimation. Ship designers need methodologies and tools to calculate energy and fuel consumption for different component sizes, to facilitate decision-making on feasibility and performance for both retrofit and new-design cases. The aim of this work is to compare three alternative modeling approaches for estimating energy and fuel consumption with various hydrogen tank sizes, battery capacities, and load-sharing strategies. A fishery vessel is selected as an example, using load demand data logged over a year of operation. The modeled power system consists of a PEM fuel cell, a diesel genset, and a battery. The three methodologies are: first, an energy-based model; second, a time-domain model of load variations with a rule-based power management system (PMS); and third, a time-domain model with a dynamic PMS strategy based on optimization with perfect foresight. The errors and potentials of the methods are discussed, and design sensitivity studies are conducted for this case. The results show that the energy-based method can estimate fuel and energy consumption with acceptable accuracy; however, models that consider the time variation of the load provide more realistic estimates of energy and fuel consumption with respect to hydrogen tank and battery size, still at low computational cost. Keywords: fuel cell, battery, hydrogen, hybrid power system, power management system
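The second methodology above can be sketched as stepping through a load time series with a rule-based PMS. The rules here are an assumption for illustration: the fuel cell serves the load up to its rating, the battery discharges to cover the remainder while energy is available, and the diesel genset covers any shortfall. Battery charging, efficiencies, and power ratings are omitted, and all sizes and the load profile are invented:

```python
FC_RATING_KW = 300.0        # assumed PEM fuel cell rating
DT_H = 0.25                 # 15-minute time steps

def simulate(load_kw, soc_kwh=100.0):
    fc_kwh = genset_kwh = 0.0
    for load in load_kw:
        fc = min(load, FC_RATING_KW)          # fuel cell takes base load
        residual = load - fc
        # battery discharges to cover the residual, limited by stored energy
        batt = min(residual, soc_kwh / DT_H)
        soc_kwh -= batt * DT_H
        genset = residual - batt              # genset covers what is left
        fc_kwh += fc * DT_H
        genset_kwh += genset * DT_H
    return fc_kwh, genset_kwh, soc_kwh

# illustrative profile: calm transit, heavy fishing operation, return transit
profile = [250.0] * 8 + [450.0] * 8 + [200.0] * 8
fc_kwh, genset_kwh, soc = simulate(profile)
print(f"fuel cell: {fc_kwh} kWh, genset: {genset_kwh} kWh, final SOC: {soc} kWh")
```

An energy-based model (the first methodology) would integrate the same profile without the time stepping, and so could not reveal that the battery here is exhausted mid-operation and the genset must pick up the peak; the perfect-foresight optimization (the third methodology) would instead schedule the battery over the whole horizon.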
Procedia PDF Downloads 42
21447 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level
Authors: Pedro M. Abreu, Bruno R. Mendes
Abstract:
The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, and the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied in healthcare ever since. Co-payments ration and rationalize users’ access to healthcare services and products and, simultaneously, qualify and improve those services and products for the end-user. This analysis of co-payment strategies in general, and hospital practices in particular, covered all the European regions and identified four reference countries that apply this tool repeatedly and with different approaches. The structure, content, and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, with the results expressed in a scorecard. The analysis concludes that the German models (total scores of 68.2% and 63.6% for the two elected co-payments) achieve the most compliance and effectiveness, the English models (total score of 50%) are the most accessible, and the French models (total score of 50%) are best suited to the socio-economic and legal framework. Other European models did not show the same quality and/or performance and so were not taken as a standard for the future design of co-payment strategies.
In this sense, co-payments are a strategy not only to moderate the consumption of healthcare products and services, but especially to improve them, and to increase the value that end-users assign to these services and products, such as medicines. Keywords: clinical pharmacy, co-payments, healthcare, medicines
Procedia PDF Downloads 251
21446 Fuzzy-Machine Learning Models for the Prediction of Fire Outbreak: A Comparative Analysis
Authors: Uduak Umoh, Imo Eyoh, Emmauel Nyoho
Abstract:
This paper compares fuzzy-machine learning algorithms, Support Vector Machine (SVM) and K-Nearest Neighbor (KNN), for predicting cases of fire outbreak. The paper uses a fire outbreak dataset with three features (temperature, smoke, and flame). The data is pre-processed in three steps: an Interval Type-2 Fuzzy Logic (IT2FL) algorithm predicts feature labels in the dataset, Min-Max Normalization scales the dataset, and Principal Component Analysis (PCA) selects relevant features. The output of the pre-processing is a dataset with two principal components (PC1 and PC2). The pre-processed dataset is then used to train the aforementioned machine learning models. K-fold cross-validation (with K=10) is used to evaluate the performance of the models using the metrics ROC (receiver operating characteristic), specificity, and sensitivity. The models are also tested with 20% of the dataset. The validation results show that KNN is the better model for fire outbreak detection, with an ROC value of 0.99878, followed by SVM with an ROC value of 0.99753. Keywords: machine learning algorithms, interval type-2 fuzzy logic, fire outbreak, support vector machine, k-nearest neighbour, principal component analysis
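The evaluation loop above, k-fold cross-validation of a KNN classifier reporting sensitivity and specificity, can be sketched in pure Python. The two-feature synthetic data below stands in for the (PC1, PC2) components; it is not the fire outbreak dataset used in the paper, and the pre-processing steps are omitted:

```python
import random

random.seed(7)

def make_data(n=200):
    # two Gaussian clusters in (PC1, PC2); label 1 = fire outbreak (assumed)
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        centre = (2.0, 2.0) if label else (0.0, 0.0)
        point = (centre[0] + random.gauss(0, 0.6),
                 centre[1] + random.gauss(0, 0.6))
        data.append((point, label))
    return data

def knn_predict(train, x, k=5):
    # majority vote among the k nearest training points (squared distance)
    nearest = sorted(train,
                     key=lambda p: (p[0][0]-x[0])**2 + (p[0][1]-x[1])**2)[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

def kfold_eval(data, k_folds=10):
    random.shuffle(data)
    tp = tn = fp = fn = 0
    fold = len(data) // k_folds
    for i in range(k_folds):
        test = data[i*fold:(i+1)*fold]
        train = data[:i*fold] + data[(i+1)*fold:]
        for x, label in test:
            pred = knn_predict(train, x)
            if pred == 1 and label == 1:   tp += 1
            elif pred == 0 and label == 0: tn += 1
            elif pred == 1 and label == 0: fp += 1
            else:                          fn += 1
    return tp / (tp + fn), tn / (tn + fp)   # sensitivity, specificity

sens, spec = kfold_eval(make_data())
print(f"sensitivity: {sens:.3f}, specificity: {spec:.3f}")
```

Sensitivity here is the fraction of true outbreaks detected and specificity the fraction of non-outbreaks correctly rejected, the same two metrics (alongside ROC) used in the paper's comparison.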
Procedia PDF Downloads 185