Search results for: time series prediction
19551 Solution Approaches for Some Scheduling Problems with Learning Effect and Job Dependent Delivery Times
Authors: M. Duran Toksari, Berrin Ucarkus
Abstract:
In this paper, we propose two algorithms to optimally solve makespan and total completion time scheduling problems with learning effect and job-dependent delivery times in a single machine environment. The delivery time is the extra time required to eliminate adverse effects between the main processing and delivery to the customer. We introduce job-dependent delivery times for two single machine scheduling problems with a position-dependent learning effect, namely makespan and total completion time minimization. The results of the two algorithms proposed for each problem are compared with LINGO solutions for 50-job, 100-job and 150-job problems. The proposed algorithms find the same results in shorter time.
Keywords: delivery times, learning effect, makespan, scheduling, total completion time
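A minimal sketch of the kind of setting the abstract describes, assuming the standard position-based learning model in which the actual processing time of job j scheduled in position r is p_j * r**a (a < 0), with a job-dependent delivery time q_j added to each completion time. All data and names are illustrative; this is not the authors' algorithm.

```python
# Single-machine scheduling with a position-based learning effect and
# job-dependent delivery times. Hypothetical data for illustration.

a = -0.322                      # learning index (illustrative)
jobs = {"J1": (7, 2), "J2": (3, 5), "J3": (9, 1)}  # job -> (p_j, q_j)

# SPT ordering is a natural heuristic here; for pure total completion
# time under this learning model it is known to be optimal.
order = sorted(jobs, key=lambda j: jobs[j][0])

t, total_completion, delivery_makespan = 0.0, 0.0, 0.0
for r, j in enumerate(order, start=1):
    p, q = jobs[j]
    t += p * r**a               # position-dependent processing time
    total_completion += t
    delivery_makespan = max(delivery_makespan, t + q)

print(order, round(total_completion, 2), round(delivery_makespan, 2))
```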
Procedia PDF Downloads 473
19550 IT-Aided Business Process Enabling Real-Time Analysis of Candidates for Clinical Trials
Authors: Matthieu-P. Schapranow
Abstract:
Recruitment of participants for clinical trials requires the screening of a large number of potential candidates, i.e. testing for trial-specific inclusion and exclusion criteria, which is a time-consuming and complex task. Today, a significant amount of time is spent on the identification of adequate trial participants, as their selection may affect the overall study results. We introduce a unique patient eligibility metric, which allows systematic ranking and classification of candidates based on trial-specific filter criteria. Our web application enables real-time analysis of patient data and assessment of candidates using freely definable inclusion and exclusion criteria. As a result, the overall time required for identifying eligible candidates is tremendously reduced, whilst our contribution introduces additional degrees of freedom for evaluating the relevance of individual candidates.
Keywords: in-memory technology, clinical trials, screening, eligibility metric, data analysis, clustering
Procedia PDF Downloads 496
19549 In vivo Antiplatelet Activity Test of Wet Extract of Mimusops elengi L.'s Leaves on DDY Strain Mice as an Effort to Treat Atherosclerosis
Authors: Dewi Tristantini, Jason Jonathan
Abstract:
Coronary Artery Disease (CAD) is one of the deadliest diseases and is caused by atherosclerosis. Atherosclerosis is a disease in which plaque builds up inside the arteries. Plaque is made up of fat, cholesterol, calcium, platelets, and other substances found in blood. The current treatment of atherosclerosis is antiplatelet therapy, but such treatments often cause gastrointestinal irritation, muscle pain and hormonal imbalance. Mimusops elengi L.'s leaves can be utilized as a natural and cheap antiplatelet source because they contain flavonoids such as quercetin. The antiplatelet aggregation effect of the wet extract of Mimusops elengi L.'s leaves was measured by bleeding time on DDY strain mice, with the test substances given orally over a period of 8 days. The bleeding time was measured on the first day and the 9th day. Empirically, the dose used for humans is 8.5 g of leaves in 600 ml of water, which is equivalent to 2.1 g of leaves in 350 ml of water for mice. The extract was given to mice at 3 doses: 0.05 ml/day, 0.1 ml/day, and 0.2 ml/day. After obtaining the percentage increase in bleeding time, data were analyzed by analysis of variance (ANOVA), followed by individual comparison within the groups by LSD test. The test substances above increased bleeding time by 21%, 62%, and 128%, respectively. In conclusion, the 0.2 ml/day dose of the wet extract of Mimusops elengi L.'s leaves increased bleeding time more than clopidogrel as a positive control, which gave a 110% increase in bleeding time.
Keywords: antiplatelets, atherosclerosis, bleeding time, Mimusops elengi
Procedia PDF Downloads 270
19548 Oil-price Volatility and Economic Prosperity in Nigeria: Empirical Evidence
Authors: Yohanna Panshak
Abstract:
The impact of macroeconomic instability on economic growth and prosperity has been at the forefront of discourse among researchers and policy makers and has generated many controversies over the years. This has produced a series of research efforts towards understanding the remote causes of this phenomenon: its nature, determinants, and how it can be targeted and mitigated. While some have opined that the root cause of macroeconomic flux in Nigeria is oil-price volatility, others view the issue as resulting from a constellation of structural constraints both within and outside the shores of the country. Research works of scholars such as Akpan (2009), Aliyu (2009), and Olomola (2006) argue that oil volatility can determine economic growth or has the potential of doing so. On the contrary, Darby (1982) and Cerralo (2005) share the opinion that it can slow down growth. The earlier argument rests on the understanding that for net oil-exporting economies, a price upbeat directly increases real national income through higher export earnings, whereas the latter allude to the case of net oil-importing countries, which experience increased input costs, reduced non-oil demand, low investment, falling tax revenues and ultimately an increase in the budget deficit, which further reduces welfare. Therefore, the precise impact of oil price volatility on virtually any economy is a function of whether it is an oil-exporting or oil-importing nation. Research on oil price volatility and its outcome on the growth of the Nigerian economy is evolving and marching towards resolving Nigeria's macroeconomic instability, as oil revenue still remains the mainstay and driver of socio-economic engineering. Recently, a major importer of Nigeria's oil, the United States, made a historic breakthrough towards a more efficient energy source for its economy, with the capacity to serve a significant part of the world. This undoubtedly suggests a threat to the exchange earnings of the country, so the need to understand fluctuation in its major export commodity is critical. This paper leans on the Renaissance growth theory, with greater focus on the theoretical work of Lee (1998), a leading proponent of this school, who draws a clear distinction between oil price changes and oil price volatility. Based on the above background, the research seeks to empirically examine the impact of oil-price volatility on government expenditure using quarterly time series data spanning 1986:1 to 2014:4. A Vector Auto Regression (VAR) econometric approach is used. The structural properties of the model are tested using Augmented Dickey-Fuller and Phillips-Perron tests. Relevant diagnostic tests of heteroscedasticity, serial correlation and normality are also carried out. Policy recommendations are offered based on the empirical findings, which should assist policy makers not only in Nigeria but the world over.
Keywords: oil-price, volatility, prosperity, budget, expenditure
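As a rough illustration of the described workflow (unit-root testing followed by VAR estimation), here is a sketch using statsmodels; the variable names and file are hypothetical stand-ins for the paper's quarterly series.

```python
# Stationarity check and VAR estimation, schematically. The CSV layout
# and column names are hypothetical; the paper's actual series are
# quarterly oil-price volatility and government expenditure, 1986Q1-2014Q4.
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("nigeria_quarterly.csv",  # hypothetical file
                 parse_dates=["quarter"], index_col="quarter")

for col in ["oil_price_volatility", "govt_expenditure"]:
    stat, pvalue = adfuller(df[col].dropna())[:2]
    print(f"ADF {col}: stat={stat:.3f}, p={pvalue:.3f}")
    # difference non-stationary series before estimating the VAR

model = VAR(df.diff().dropna())
results = model.fit(maxlags=8, ic="aic")   # lag order chosen by AIC
print(results.summary())
```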
Procedia PDF Downloads 274
19547 Interpretable Deep Learning Models for Medical Condition Identification
Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji
Abstract:
Accurate prediction of a medical condition with direct clinical evidence is a long-sought goal in the medical management and health insurance fields. Although great progress has been made with machine learning algorithms, the medical community is still, to a certain degree, suspicious about model accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. This deep learning model uses a hierarchical attention structure that matches naturally with the medical history data structure and reflects the member's encounter (date of service) sequence. The model attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. This model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years' medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members' medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3 and calculates the contribution from individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. Two examples: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests and medications. The model predicts member A has a high risk of CKD3, with the following well-contributing clinical events: multiple high 'Creatinine in Serum or Plasma' tests and multiple low 'Glomerular filtration rate' kidney-function tests. Among the abnormal lab tests, more recent results contributed more to the prediction. The model also indicates that regular office visits, no abnormal findings in medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past 3 years and was predicted to have a low risk of CKD3, because the model didn't identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including 'Glomerular filtration rate', were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its most raw format without any further data aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance for error and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep-learning model using a 3-level attention structure, sourcing both EMR and claims data, including all 4 types of medical data, on the entire Medicare population of a big insurance company, and, more importantly, directly generating model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients' demographics and information from free-text physician notes.
Keywords: deep learning, interpretability, attention, big data, medical conditions
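A toy illustration of one level of the attention hierarchy described above: softmax attention over the medical codes within a single encounter, producing an encounter vector whose attention weights double as per-code contributions. The embeddings are random stand-ins for what the trained model would learn.

```python
# One attention level from a 3-level hierarchy: codes -> encounter.
# Repeating the same mechanism over encounters, then over code types,
# yields the member-level representation used for prediction.
import numpy as np

rng = np.random.default_rng(0)
d = 16
code_embeddings = rng.normal(size=(5, d))   # 5 codes in the encounter
context = rng.normal(size=d)                # learned attention context

scores = code_embeddings @ context          # relevance of each code
weights = np.exp(scores - scores.max())
weights /= weights.sum()                    # softmax attention weights

encounter_vec = weights @ code_embeddings   # weighted sum of code vectors
print(np.round(weights, 3))  # per-code contribution -> interpretability
```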
Procedia PDF Downloads 95
19546 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth in Patients with Lymph Nodes Metastases
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This paper is devoted to mathematical modelling of the progression and stages of breast cancer. We propose the Consolidated mathematical growth model of primary tumor and secondary distant metastases growth in patients with lymph nodes metastases (CoM-III) as a new research tool. We are interested in: 1) modelling the whole natural history of primary tumor and secondary distant metastases growth in patients with lymph nodes metastases; 2) developing an adequate and precise CoM-III which reflects the relations between primary tumor and secondary distant metastases; 3) analyzing the CoM-III scope of application; 4) implementing the model as a software tool. Firstly, the CoM-III includes an exponential tumor growth model as a system of deterministic nonlinear and linear equations. Secondly, the mathematical model corresponds to the TNM classification. It allows calculation of different growth periods of primary tumor and secondary distant metastases in patients with lymph nodes metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for secondary distant metastases growth in patients with lymph nodes metastases; 3) the 'visible period' for secondary distant metastases growth in patients with lymph nodes metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, while the others are based on additional statistical data. Thus, the CoM-III model and predictive software: a) detect different growth periods of primary tumor and secondary distant metastases in patients with lymph nodes metastases; b) forecast the period of distant metastases appearance in patients with lymph nodes metastases; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoM-III: the number of doublings for the 'non-visible' and 'visible' growth periods of secondary distant metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of secondary distant metastases. The CoM-III enables, for the first time, prediction of the whole natural history of primary tumor and secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-III correctly describes primary tumor and secondary distant metastases growth of IA, IIA, IIB, IIIB (T1-4N1-3M0) stages in patients with lymph nodes metastases (N1-3); b) it facilitates understanding of the appearance period and inception of secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, primary tumor, secondary metastases, survival
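The doubling-time arithmetic underlying an exponential growth model of this kind can be illustrated as follows; the volumes and threshold are illustrative numbers, not values from the paper.

```python
# Exponential tumor growth: V(t) = V0 * 2**(t / DT), where DT is the
# tumor volume doubling time.
import math

v0, v1 = 0.5, 4.0        # tumor volume (cm^3) at two observations
dt_days = 180            # days between the two measurements

doublings = math.log2(v1 / v0)          # number of volume doublings
doubling_time = dt_days / doublings     # tumor volume doubling time
print(f"{doublings:.2f} doublings, DT = {doubling_time:.1f} days")

# With a detection threshold, the 'non-visible period' is the time to
# grow from a cell-scale volume to the visible threshold volume:
v_cell, v_visible = 1e-9, 1.0           # cm^3 (illustrative)
t_nonvisible = doubling_time * math.log2(v_visible / v_cell)
print(f"non-visible period = {t_nonvisible:.0f} days")
```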
Procedia PDF Downloads 308
19545 Modelling and Investigation of Phase Change Phenomena of Multiple Water Droplets
Authors: K. R. Sultana, K. Pope, Y. S. Muzychka
Abstract:
In recent years, research on heat transfer and phase change phenomena of liquid water droplets has experienced growing interest in aircraft icing, power transmission line icing, marine icing and wind turbine icing applications. This growing interest is speeding up the move from single to multiple droplet phenomena. Impingement of multiple droplets and the resulting solidification after impact on a very cold surface are computationally studied in this paper. The model used in the current study solves the flow equations, composed of the energy balance and volume fraction equations. The main aim of the study is to investigate the effects of several thermophysical properties (density, thermal conductivity and specific heat) on droplet freezing. The outcome is examined through several important factors, for instance, liquid fraction, total freezing time, droplet temperature and total heat transfer rate in the interface region. The liquid fraction helps to understand the complete phase change phenomena during solidification. Temperature distribution and heat transfer rate help to demonstrate the overall thermal exchange behavior between the droplets and the substrate surface. Findings of this research provide an important technical achievement for ice modeling and prediction studies.
Keywords: droplets, CFD, thermophysical properties, solidification
Procedia PDF Downloads 246
19544 Investigating the Factors Affecting One Time Passwords Technology Acceptance: A Case Study in Banking Environment
Authors: Sajad Shokohuyar, Mahsa Zomorrodi Anbaji, Saghar Pouyan Shad
Abstract:
With fast technology growth, modern banking tries to decrease visits to banks' branches and increase customer consent. One of the problems banks face is securing customers' passwords; the banks' solution is a one-time password creation system. By adapting the technology acceptance model theory, this research assesses the factors affecting the use of one-time password devices in Iranian banking, studying the customers of one of Iran's private banks. The statistical population is all of this bank's customers who use electronic banking services and one-time password technology, and the questionnaires were distributed among members of the statistical population in 5 selected groups from the north, south, center, east and west of Tehran. Findings show that confidentiality preservation, education, ease of use, and advertising and informing have positive relations, while distinct hardware and age have negative relations.
Keywords: security, electronic banking, one time password, information technology
Procedia PDF Downloads 460
19543 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities
Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun
Abstract:
With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, aiming to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities; the model integrates RFM analysis, K-means clustering, and LSTM networks to achieve this objective. The research utilizes RFM analysis, traditionally used in customer value assessment, to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities, significantly improving prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning
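A minimal sketch of the RFM-plus-K-means stage described above, transplanted from customer analytics to component failure logs; the column names and data are hypothetical, and the LSTM stage that would follow is omitted.

```python
# RFM features per electrical component, then K-means segmentation.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

failures = pd.DataFrame({
    "component": ["T1", "T1", "T2", "T3", "T3", "T3"],
    "day":       [10, 40, 25, 5, 30, 55],         # day of failure
    "cost":      [500, 300, 1200, 200, 250, 400], # repair cost
})

now = failures["day"].max()
rfm = failures.groupby("component").agg(
    recency=("day", lambda d: now - d.max()),   # days since last failure
    frequency=("day", "count"),                 # number of failures
    monetary=("cost", "sum"),                   # total failure cost
)

X = StandardScaler().fit_transform(rfm)
rfm["cluster"] = KMeans(n_clusters=2, n_init=10,
                        random_state=0).fit_predict(X)
print(rfm)
```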
Procedia PDF Downloads 62
19542 Integrating Time-Series and High-Spatial Remote Sensing Data Based on Multilevel Decision Fusion
Authors: Xudong Guan, Ainong Li, Gaohuan Liu, Chong Huang, Wei Zhao
Abstract:
Due to the low spatial resolution of MODIS data, the accuracy of extracting small plaques in landscapes with a high degree of fragmentation is greatly limited. To this end, the study combines Landsat data, with its higher spatial resolution, and MODIS data, with its higher temporal resolution, for decision-level fusion. Considering the importance of land heterogeneity in the fusion process, it is incorporated as a weighting factor that linearly weights the Landsat classification result and the MODIS classification result. Three levels are used to complete the data fusion: the MODIS pixel level, the Landsat pixel level, and an object level that connects these two levels. The multilevel decision fusion scheme was tested at two sites in the lower Mekong basin. A comparison test showed that classification accuracy improved over the single-data-source classification results in terms of overall accuracy. The method was also compared with the two-level combination results and a weighted-sum decision-rule-based approach. The decision fusion scheme is extensible to other multi-resolution data decision fusion applications.
Keywords: image classification, decision fusion, multi-temporal, remote sensing
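A sketch of decision-level linear weighting of this kind; the specific rule tying the weight to heterogeneity is an assumption for illustration, not necessarily the authors' formula.

```python
# Linear decision-level fusion of two classifiers' class probabilities.
import numpy as np

p_landsat = np.array([0.6, 0.3, 0.1])   # class probabilities, Landsat
p_modis   = np.array([0.2, 0.7, 0.1])   # class probabilities, MODIS

# Assumed rule: the more heterogeneous the landscape within the MODIS
# pixel, the more weight the finer-resolution Landsat result receives.
heterogeneity = 0.8                      # in [0, 1], illustrative
w = 0.5 + 0.5 * heterogeneity            # Landsat weight

p_fused = w * p_landsat + (1 - w) * p_modis
print(p_fused.argmax(), np.round(p_fused, 3))
```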
Procedia PDF Downloads 127
19541 Alphabet Recognition Using Pixel Probability Distribution
Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay
Abstract:
Our project topic is "Alphabet Recognition Using Pixel Probability Distribution". The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is used in banks and other high-security buildings. One popular mobile application reads a visiting card and directly stores it to the contacts. OCRs are also known to be used in radar systems for reading speeding vehicles' license plates, among many other uses. The implementation of our project has been done using Visual Studio and OpenCV (Open Source Computer Vision). Our algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: This module performs database generation. The database was generated using two methods: (a) Run-time generation: the database is generated at compile time using the inbuilt fonts of the OpenCV library; human intervention is not necessary for generating this database. (b) Contour detection: a 'jpeg' template containing different fonts of an alphabet is converted to the weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning, and the final database requires little memory to be stored (119 kB precisely). (2) Preprocessing: The input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing, dilating etc., and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: The extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix
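A minimal sketch of the preprocessing step in module (2) using OpenCV: adaptive thresholding, dilation, and contour detection to produce letter candidates. The file name is hypothetical.

```python
# Preprocessing and segmentation of a text image with OpenCV.
import cv2
import numpy as np

img = cv2.imread("text_page.jpg", cv2.IMREAD_GRAYSCALE)

# Adaptive threshold: args are maxValue, method, type, blockSize, C.
binary = cv2.adaptiveThreshold(
    img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY_INV, 11, 2)

dilated = cv2.dilate(binary, np.ones((2, 2), np.uint8), iterations=1)

contours, _ = cv2.findContours(
    dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Each bounding box is a letter candidate passed on to the classifier.
boxes = [cv2.boundingRect(c) for c in contours]
print(len(boxes), "letter candidates")
```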
Procedia PDF Downloads 392
19539 Asymmetric of the Segregation-Enhanced Brazil Nut Effect
Authors: Panupat Chaiworn, Soraya lama
Abstract:
We study the motion of particles in cylinders which are subjected to a sinusoidal vertical vibration. We measure the rising time of a large intruder from the bottom of the container to the free surface of the bed particles and find that the rising time as a function of intruder density increases to a maximum and then decreases monotonically. The result qualitatively accords with previous experimental findings obtained using the relative humidity of the bed particles, where the convection of the bed particles in the containers was found to move slowly and a minimal, instead of maximal, rising time was found in the small density region. Our experimental results suggest that the topology of the container plays an important role in the Brazil nut effect.
Keywords: granular particles, Brazil nut effect, cylinder container, vertical vibration, convection
Procedia PDF Downloads 534
19538 Influence of Deposition Temperature on Supercapacitive Properties of Reduced Graphene Oxide on Carbon Cloth: New Generation of Wearable Energy Storage Electrode Material
Authors: Snehal L. Kadam, Shriniwas B. Kulkarni
Abstract:
Flexible electrode materials with high surface area and good electrochemical properties are the current trend captivating researchers across the globe for application in the next-generation energy storage field. In the present work, crumpled-sheet-like reduced graphene oxide was grown on carbon cloth by the hydrothermal method at a series of different deposition temperatures for a fixed time. The influence of deposition temperature on the structural, morphological, optical and supercapacitive properties of the electrode material was investigated by XRD, Raman, XPS, TEM, FE-SEM, UV-visible and electrochemical characterization techniques. The results show that the hydrothermally synthesized reduced graphene oxide on carbon cloth has a sheet-like mesoporous structure. The reduced graphene oxide material deposited at 160°C exhibits the best supercapacitor performance, with a specific capacitance of 443 F/g at a scan rate of 5 mV/sec. Moreover, stability studies show 97% capacitance retention over 1000 CV cycles. These results show that hydrothermally synthesized RGO on carbon cloth is a potential electrode material for next-generation wearable energy storage systems. The detailed analysis and results will be presented at the conference.
Keywords: graphene oxide, reduced graphene oxide, carbon cloth, deposition temperature, supercapacitor
Procedia PDF Downloads 197
19537 Agegraphic Dark Energy with GUP
Authors: H. R. Fazlollahi
Abstract:
The origin of dark energy is unknown, so describing this mysterious component on large scales requires manipulating our theories in general relativity. Although in most models dark energy arises from extra terms introduced by modifying the Einstein-Hilbert action, its origin may trace back to fundamental aspects of the ground energy of space-time given in quantum mechanics. Hence, diluting space-time in general relativity with quantum mechanical properties leads to the Karolyhazy relation, corresponding to the energy density of quantum fluctuations of space-time. Through the generalized uncertainty principle, and with an eye to the Karolyhazy approach, in this study we extend the energy density of quantum fluctuations of space-time. The application of this idea to late-time evolution is also considered, and we show how the extra term in the generalized uncertainty principle plays the role of a plausible interaction term in the suggested model.
Keywords: generalized uncertainty principle, Karolyhazy approach, agegraphic dark energy, cosmology
Procedia PDF Downloads 77
19536 LiDAR Based Real Time Multiple Vehicle Detection and Tracking
Authors: Zhongzhen Luo, Saeid Habibi, Martin v. Mohrenschildt
Abstract:
Self-driving vehicles require a high level of situational awareness in order to maneuver safely in real-world conditions. This paper presents a LiDAR-based real-time perception system that is able to process raw sensor data for multiple-target detection and tracking in dynamic environments. The proposed algorithm is nonparametric and deterministic; that is, no assumptions or a priori knowledge are needed from the input data, and no initialization is required. Additionally, the proposed method works directly on the three-dimensional data generated by LiDAR, without sacrificing the rich information contained in the 3D domain. Moreover, a fast and efficient real-time clustering algorithm based on a radially bounded nearest neighbor (RBNN) search is applied. The Hungarian algorithm and adaptive Kalman filtering are used for data association and tracking. The proposed algorithm is able to run in real time with an average run time of 70 ms per frame.
Keywords: lidar, segmentation, clustering, tracking
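A compact sketch of radially bounded nearest neighbor (RBNN) clustering on a point cloud, using a k-d tree for the fixed-radius queries; the points are synthetic stand-ins for LiDAR returns.

```python
# RBNN clustering: flood-fill points reachable through radius-bounded
# neighbor links; each connected component becomes one cluster.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
points = np.vstack([rng.normal(c, 0.3, size=(50, 3))
                    for c in ([0, 0, 0], [5, 5, 0])])  # two objects

radius = 1.0
tree = cKDTree(points)
labels = np.full(len(points), -1)
cluster = 0
for i in range(len(points)):
    if labels[i] != -1:
        continue
    stack = [i]
    labels[i] = cluster
    while stack:
        j = stack.pop()
        for k in tree.query_ball_point(points[j], radius):
            if labels[k] == -1:
                labels[k] = cluster
                stack.append(k)
    cluster += 1

print(cluster, "clusters found")  # expected: 2
```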
Procedia PDF Downloads 429
19535 Effect of Common Yoga Protocol on Reaction Time of Football Players
Authors: Vikram Singh
Abstract:
The objective of the study was to examine the effectiveness of a common yoga protocol on the reaction time (simple visual reaction time, SVRT, measured in milliseconds/seconds) of male football players in the age group of 15 to 21 years. The 40 boys were randomly assigned to two groups, i.e. control and experimental. SVRT for both groups was measured on day 1, and post-intervention (the common yoga protocol here) SVRT was measured after 45 days of training given to the experimental group only. One-way ANOVA (univariate analysis) and an independent t-test using the SPSS 23 statistical package were applied to obtain and analyze the results. There was a significant difference after 45 days of the yoga protocol in the simple visual reaction time of the experimental group (p = .032), t(33.05) = 3.881, p = .000 (two-tailed). The null hypothesis (that there would be no post-measurement differences in the reaction times of the control and experimental groups) was rejected, where p < .05. Therefore, the alternate hypothesis was accepted.
Keywords: footballers, t-test, yoga protocol, reaction time
Procedia PDF Downloads 257
19534 Bidirectional Dynamic Time Warping Algorithm for the Recognition of Isolated Words Impacted by Transient Noise Pulses
Authors: G. Tamulevičius, A. Serackis, T. Sledevič, D. Navakauskas
Abstract:
We consider the biggest challenge in speech recognition: noise reduction. Traditionally, detected transient noise pulses are removed from the corrupted speech using pulse models. In this paper, we propose to cope with the problem directly in the Dynamic Time Warping domain. A Bidirectional Dynamic Time Warping algorithm for the recognition of isolated words impacted by transient noise pulses is proposed. It uses a simple transient noise pulse detector, employs bidirectional computation of dynamic time warping, and directly manipulates the warping results. Experimental investigation against several alternative solutions confirms the effectiveness of the proposed algorithm in reducing the impact of noise on the recognition process: a 3.9% increase in noisy speech recognition is achieved.
Keywords: transient noise pulses, noise reduction, dynamic time warping, speech recognition
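For reference, the standard forward dynamic-time-warping recursion that the bidirectional variant builds on; the paper's contribution lies in running this computation from both ends and manipulating the warping results, which is not shown here.

```python
# Classic O(n*m) dynamic time warping between two feature sequences.
import numpy as np

def dtw_distance(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])       # local distance
            D[i, j] = cost + min(D[i - 1, j],     # insertion
                                 D[i, j - 1],     # deletion
                                 D[i - 1, j - 1]) # match
    return D[n, m]

print(dtw_distance([1, 2, 3, 4], [1, 1, 2, 3, 3, 4]))
```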
Procedia PDF Downloads 562
19533 An Improved Approach to Solve Two-Level Hierarchical Time Minimization Transportation Problem
Authors: Kalpana Dahiya
Abstract:
This paper discusses a two-level hierarchical time minimization transportation problem, which is an important class of transportation problems arising in industries. This problem has been studied by various researchers, and a number of polynomial time iterative algorithms are available to find its solution. All the existing algorithms, though efficient, have some shortcomings. The current study proposes an alternate solution algorithm for the problem that is more efficient in terms of computational time than the existing algorithms. The results justifying the underlying theory of the proposed algorithm are given. Further, a detailed comparison of the computational behaviour of all the algorithms for randomly generated instances of this problem of different sizes validates the efficiency of the proposed algorithm.
Keywords: global optimization, hierarchical optimization, transportation problem, concave minimization
Procedia PDF Downloads 166
19532 Design of Multi-Loop Controller for Minimization of Energy Consumption in the Distillation Column
Authors: Vinayambika S. Bhat, S. Shanmuga Priya, I. Thirunavukkarasu, Shreeranga Bhat
Abstract:
An attempt has been made to design a decoupling controller for systems with multiple inputs and multiple outputs with dead time. The decoupler is designed for a 3×3 chemical process plant transfer function with dead time. A Quantitative Feedback Theory (QFT) based controller has also been designed for the 2×2 distillation column transfer function. The developed control techniques were simulated using MATLAB/Simulink. The stability of the process was also analyzed, together with the presence of various perturbations. Time domain specifications like settling time, along with overshoot and oscillations, were analyzed to prove the efficiency of the decoupler method. Load disturbance rejection was tested along with its performance. The QFT control technique was synthesized based on the stability and performance specifications in the presence of uncertainty in the time constant of the plant transfer function, through the sequential loop shaping technique. Further, the energy efficiency of the distillation column was improved by proper tuning of the controller. Distillation accounts for about 3% of the world's total energy consumption, so a suitable control technique is very important from an economic point of view. Real-time implementation of the process is in progress in our laboratory.
Keywords: distillation, energy, MIMO process, time delay, robust stability
Procedia PDF Downloads 416
19531 Validation of Nutritional Assessment Scores in Prediction of Mortality and Duration of Admission in Elderly, Hospitalized Patients: A Cross-Sectional Study
Authors: Christos Lampropoulos, Maria Konsta, Vicky Dradaki, Irini Dri, Konstantina Panouria, Tamta Sirbilatze, Ifigenia Apostolou, Vaggelis Lambas, Christina Kordali, Georgios Mavras
Abstract:
Objectives: Malnutrition in hospitalized patients is related to increased morbidity and mortality. The purpose of our study was to compare various nutritional scores in order to detect the most suitable one for assessing the nutritional status of elderly, hospitalized patients, and to correlate them with mortality and extension of admission duration due to patients' critical condition. Methods: The sample population included 150 patients (78 men, 72 women, mean age 80±8.2). Nutritional status was assessed by the Mini Nutritional Assessment (MNA, full and short-form), the Malnutrition Universal Screening Tool (MUST) and the short Nutritional Appetite Questionnaire (sNAQ). Sensitivity, specificity, positive and negative predictive values and ROC curves were assessed after adjustment for the cause of current admission, a known prognostic factor according to previously applied multivariate models. Primary endpoints were mortality (from admission until 6 months afterwards) and duration of hospitalization, compared to national guidelines for closed consolidated medical expenses. Results: Concerning mortality, MNA (short-form and full) and sNAQ had similar, low sensitivity (25.8%, 25.8% and 35.5% respectively), while MUST had higher sensitivity (48.4%). In contrast, all the questionnaires had high specificity (94%-97.5%). Short-form MNA and sNAQ had the best positive predictive values (72.7% and 78.6% respectively), whereas all the questionnaires had similar negative predictive values (83.2%-87.5%). MUST had the highest ROC curve (0.83), in contrast to the rest of the questionnaires (0.73-0.77). With regard to extension of admission duration, all four scores had relatively low sensitivity (48.7%-56.7%), specificity (68.4%-77.6%), positive predictive value (63.1%-69.6%), negative predictive value (61%-63%) and ROC curve (0.67-0.69). Conclusion: The MUST questionnaire is more advantageous in predicting mortality due to its higher sensitivity and ROC curve. None of the nutritional scores is suitable for prediction of extended hospitalization.
Keywords: duration of admission, malnutrition, nutritional assessment scores, prognostic factors for mortality
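The reported metrics can be reproduced from raw predictions as in the following sketch with scikit-learn; the data are fabricated stand-ins, not the study's patients.

```python
# Sensitivity, specificity, PPV, NPV and ROC AUC of a screening score
# against a binary outcome.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

died       = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0, 0])  # outcome
must_flag  = np.array([0, 0, 1, 0, 0, 1, 0, 1, 1, 0])  # MUST >= cutoff
must_score = np.array([0, 1, 2, 0, 1, 2, 0, 2, 2, 1])  # raw MUST score

tn, fp, fn, tp = confusion_matrix(died, must_flag).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
auc = roc_auc_score(died, must_score)
print(sensitivity, specificity, ppv, npv, round(auc, 2))
```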
Procedia PDF Downloads 349
19530 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model
Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini
Abstract:
The importance of the correct allocation of improvement programs has been of growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be most effective and survive the strong competition. However, to the best of our knowledge, the literature on allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap which is deeply studied in this work. The purpose of this work is to identify the best strategy to allocate improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units on average per month. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement strategy towards the two capacity constrained resources, (x) time between failures improvement strategy towards the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed and hybrid. Several comparisons of the effect of the ten strategies on lead time reduction were performed. The results indicated that for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to the academy is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strong capacity constrained resources (more than 95% utilization) is an important contribution to the literature. Another important contribution is the problem of allocation with two CCRs and the possibility of having floating capacity constrained resources. The results provided the best improvement strategies considering the different strategies of allocation of improvement programs and different positions of the capacity constrained resources. Finally, it is possible to state that both strategies, hybrid time to repair improvement and hybrid time between failures improvement, delivered the best results compared to the respective distributed strategies. The main limitations of this study mainly regard the flow shop analyzed.
Future work can further investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures
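The two simulation levers, mean time between failures (MTBF) and mean time to repair (MTTR), act on lead time partly through workstation availability; a small worked example with illustrative numbers follows.

```python
# Steady-state availability of a workstation: A = MTBF / (MTBF + MTTR).
mtbf = 40.0   # hours between failures
mttr = 2.0    # hours to repair

availability = mtbf / (mtbf + mttr)
print(f"availability = {availability:.3f}")   # 0.952

# A focused program improving MTTR on the constrained station:
mttr_improved = 1.0
print(f"improved     = {mtbf / (mtbf + mttr_improved):.3f}")  # 0.976
# Higher availability at the capacity constrained resource raises
# effective capacity and, through less queueing, shortens lead time.
```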
Procedia PDF Downloads 126
19529 AIR SAFE: an Internet of Things System for Air Quality Management Leveraging Artificial Intelligence Algorithms
Authors: Mariangela Viviani, Daniele Germano, Simone Colace, Agostino Forestiero, Giuseppe Papuzzo, Sara Laurita
Abstract:
Nowadays, people spend most of their time in closed environments, in offices, or at home. Therefore, secure and highly livable environmental conditions are needed to reduce the probability of airborne viruses spreading. Also, to lower the human impact on the planet, it is important to reduce energy consumption. Heating, Ventilation, and Air Conditioning (HVAC) systems account for the major part of energy consumption in buildings [1]. Devising systems to control and regulate the airflow is, therefore, essential for energy efficiency. Moreover, an optimal setting for thermal comfort and air quality is essential for people's well-being, at home or in offices, and increases productivity. Thanks to the features of Artificial Intelligence (AI) tools and techniques, it is possible to design innovative systems with: (i) improved monitoring and prediction accuracy; (ii) enhanced decision-making and mitigation strategies; (iii) real-time air quality information; (iv) increased efficiency in data analysis and processing; (v) advanced early warning systems for air pollution events; (vi) automated and cost-effective monitoring networks; and (vii) a better understanding of air quality patterns and trends. We propose AIR SAFE, an IoT-based infrastructure designed to optimize air quality and thermal comfort in indoor environments leveraging AI tools. AIR SAFE employs a network of smart sensors collecting indoor and outdoor data to be analyzed in order to take any corrective measures to ensure the occupants' wellness. The data are analyzed through AI algorithms able to predict the future levels of temperature, relative humidity, and CO₂ concentration [2]. Based on these predictions, AIR SAFE takes actions, such as opening/closing the window or the air conditioner, to guarantee a high level of thermal comfort and air quality in the environment. In this contribution, we present the results from the AI algorithm we have implemented on the first set of data collected in a real environment. The results were compared with other models from the literature to validate our approach.
Keywords: air quality, internet of things, artificial intelligence, smart home
Procedia PDF Downloads 97
19528 Discovering Event Outliers for Drug as Commercial Products
Authors: Arunas Burinskas, Aurelija Burinskiene
Abstract:
On average, ten percent of drugs (commercial products) are not available in pharmacies due to shortage. A shortage event disbalances sales and requires a recovery period that is too long. A critical issue is therefore that pharmacies do not record potential sales transactions during shortage and recovery periods. The authors suggest estimating outliers during shortage and recovery periods. To shorten the recovery period, the authors suggest using a prediction of average sales per sales day, which helps protect the data from being skewed downwards or upwards. The authors use an outlier visualization method across different drugs and apply the Grubbs test for significance evaluation. The researched sample is 100 drugs in a one-month time frame. The authors detected that products with high demand variability had outliers. Among the analyzed drugs (commercial products): i) high-demand-variability drugs have a one-week shortage period, and the probability of facing a shortage is 69.23%; ii) mid-demand-variability drugs have a three-day shortage period, and the likelihood of falling into deficit is 34.62%. To avoid shortage events and minimize the recovery period, real data must be set up. Even though there are some outlier detection methods for drug data cleaning, they have not been used for minimization of the recovery period once a shortage has occurred. The authors use Grubbs' test, a real-life data cleaning method, for outlier adjustment. In the paper, the outlier adjustment method is applied with a confidence level of 99%. In practice, Grubbs' test has been used to detect outliers for cancer drugs, with positive results reported. Grubbs' test detects outliers which exceed the boundaries of a normal distribution; the result is a probability that indicates the core data of actual sales. The application of the outlier test method helps to represent the difference between the sample mean and the most extreme data, considering the standard deviation. The test detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with the Grubbs' test method. The suggested framework is applicable during shortage events and recovery periods. The proposed framework has practical value and could be used to minimize the recovery period required after a shortage event occurs.
Keywords: drugs, Grubbs' test, outlier, shortage event
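A minimal implementation of the two-sided, maximum-deviation form of Grubbs' test at the paper's 99% confidence level; the sales figures are illustrative.

```python
# Grubbs' test: G = max|x_i - mean| / s, compared against a critical
# value derived from the t distribution.
import numpy as np
from scipy import stats

def grubbs_statistic(x):
    x = np.asarray(x, dtype=float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def grubbs_critical(n, alpha=0.01):
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

sales = [12, 14, 11, 13, 15, 12, 48, 13, 14, 12]  # one drug, daily units
g = grubbs_statistic(sales)
g_crit = grubbs_critical(len(sales))
print(f"G = {g:.2f}, critical = {g_crit:.2f}, outlier: {g > g_crit}")
```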
Procedia PDF Downloads 137
19527 A Biomechanical Perfusion System for Microfluidic 3D Bioprinted Structure
Authors: M. Dimitri, M. Ricci, F. Bigi, M. Romiti, A. Corvi
Abstract:
Tissue engineering has reached a significant milestone with the integration of 3D printing for the creation of complex bioconstructs equipped with vascular networks, crucial for cell maintenance and growth. This study aims to demonstrate the effectiveness of a portable microperfusion system designed to adapt dynamically to the evolving conditions of cell growth within 3D-printed bioconstructs. The microperfusion system was developed to provide a constant and controlled flow of nutrients and oxygen through the integrated vessels in the bioconstruct, replicating in vivo physiological conditions. Through a series of preliminary experiments, we evaluated the system's ability to maintain a favorable environment for cell proliferation and differentiation. Measurements of cell density and viability were performed to monitor the health and functionality of the tissue over time. Preliminary results indicate that the portable microperfusion system not only supports but optimizes cell growth, effectively adapting to changes in metabolic needs during the bioconstruct maturation process. This research opens new perspectives in tissue engineering, demonstrating that a portable microperfusion system can be successfully integrated into 3D-printed bioconstructs, promoting sustainable and uniform cell growth. The implications of this study are far-reaching, with potential applications in regenerative medicine and pharmacological research, providing a platform for the development of functional and complex tissues.
Keywords: biofabrication, microfluidic perfusion system, 4D bioprinting
Procedia PDF Downloads 37
19526 A Comparative Assessment of Some Algorithms for Modeling and Forecasting Horizontal Displacement of Ialy Dam, Vietnam
Authors: Kien-Trinh Thi Bui, Cuong Manh Nguyen
Abstract:
In order to simulate and reproduce the operational characteristics of a dam visually, it is necessary to capture the displacement at different measurement points and analyze the observed movement data promptly to forecast dam safety. The accuracy of forecasts is further improved by applying machine learning methods to the data analysis process. In this study, the horizontal displacement monitoring data of the Ialy hydroelectric dam (Vietnam) was applied to three machine learning algorithms (Gaussian processes, multi-layer perceptron neural networks, and the M5-Rules algorithm) for modelling and forecasting the dam's horizontal displacement. The database used in this research was built by collecting time series data from 2006 to 2021, divided into two parts: a training dataset and a validating dataset. The final results show that all three algorithms perform well for both training and model validation, with the MLP being the best model. Their usability is further investigated by comparison with a benchmark model created by multiple linear regression. The results show that the performance obtained from the GP model, the MLP model and the M5-Rules model is much better; therefore, these three models should be used to analyze and predict the horizontal displacement of the dam.
Keywords: Gaussian processes, horizontal displacement, hydropower dam, Ialy dam, M5-Rules, multi-layer perceptron neural networks
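A schematic of this model comparison using scikit-learn; the displacement series is a synthetic placeholder for the 2006-2021 monitoring data, and M5-Rules (a Weka algorithm with no scikit-learn equivalent) is omitted.

```python
# Compare GP and MLP regressors against a multiple linear regression
# benchmark on a synthetic displacement dataset.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # e.g. level, temp, lags
y = X @ [1.5, -0.8, 0.3, 0.1] + 0.1 * rng.normal(size=200)
X_tr, X_va, y_tr, y_va = X[:150], X[150:], y[:150], y[150:]

models = {
    "GP":  GaussianProcessRegressor(),
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0),
    "MLR": LinearRegression(),                # benchmark
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_va, m.predict(X_va)) ** 0.5
    print(f"{name}: RMSE = {rmse:.3f}")
```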
Procedia PDF Downloads 219
19525 A Genetic Algorithm Approach for Multi Constraint Team Orienteering Problem with Time Windows
Authors: Uyanga Sukhbaatar, Ahmed Lbath, Mendamar Majig
Abstract:
The Orienteering Problem (OP) is the best-known starting point for modeling the tourist trip design problem. In order to meet tourists' interests and constraints, the OP is becoming more and more complicated to solve. The Multi Constraint Team Orienteering Problem with Time Windows (MCTOPTW) is the latest extension of the OP, which differs from other extensions by including more extra associated constraints. The goal of the MCTOPTW is to maximize the tourist's satisfaction score while not violating any of these constraints. This paper presents a genetic algorithm approach to tackle the MCTOPTW. Our algorithm is tested on benchmark data from the literature, and the performance results are compared.
Keywords: multi constraint team orienteering problem with time windows, genetic algorithm, tour planning system
Procedia PDF Downloads 629
19524 Microwave Assisted Extraction (MAE) of Castor Oil from Castor Bean
Authors: Ghazi Faisal Najmuldeen, Rosli Mohd Yunus, Nurfarahin Bt Harun, Mardhiana Binti Ismail
Abstract:
Microwave extraction has attracted great interest among researchers. The main virtues of the microwave technique are its cost-effectiveness, time saving, and simple handling procedure. Castor beans were chosen because of their high content of fatty acids, especially ricinoleic acid. The purpose of this research is to extract castor oil using microwave assisted extraction (MAE) with ethanol as the solvent, to investigate the influence of extraction time on castor oil yield, and to characterize the main composition of the produced castor oil using GC-MS. It was found that there is a direct dependence between the oil yield and the extraction time, as the yield increases from 45% to 58% when the time increases from 10 min to 60 min. The major components of castor oil detected by GC-MS were ricinoleic acid, linoleic acid and oleic acid.
Keywords: microwave assisted extraction (MAE), castor oil, ricinoleic acid, linoleic acid
Procedia PDF Downloads 512
19523 Advantages and Disadvantages of Distance Learning in Comparison with Full-time Teaching from the Perspective of Chinese University Students
Authors: Daniel Ecler
Abstract:
The aim of this paper was to find out how Chinese university students perceive distance learning compared to full-time teaching, to reveal its advantages and disadvantages, and to try to find which elements could be implemented in regular full-time teaching in order to make it more effective. Recent events have shown that online teaching has a significant role to play in the field of education and needs to be given increased attention and scrutiny. For this purpose, a research survey was conducted using semi-structured questionnaires, which aimed to determine the attitudes of Chinese university students to the phenomenon of distance learning. The results of this survey revealed that most students prefer distance learning to full-time teaching, mainly because it gives them more freedom to participate in teaching, regardless of the environment in which they are currently located. In conclusion, the possibility to participate virtually in teaching from anywhere is a huge advantage that could become part of regular teaching in the future. However, further research into this issue will be necessary.
Keywords: distance learning, full-time teaching, Chinese college students, cultural background
Procedia PDF Downloads 179
19522 Vibrations of Springboards: Mode Shape and Time Domain Analysis
Authors: Stefano Frassinelli, Alessandro Niccolai, Riccardo E. Zich
Abstract:
Diving is an important Olympic sport. In this sport, the effective performance of the athlete is related to his capability to interact correctly with the springboard: the elevation of the jump and the correctness of the dive are influenced by the vibrations of the board. In this paper, the vibrations of the springboard are analyzed by means of typical tools for vibration analysis. Firstly, a modal analysis is performed on two different models of the springboard; then, these two models and another one are analyzed with a time-domain analysis, performed by integrating the equations of motion of deformable bodies. All these analyses are compared with experimental data measured on a real springboard by means of a 6-axis accelerometer; these measurements are aimed at assessing the proposed models. The acquired data are analyzed both in the frequency domain and in the time domain.
Keywords: springboard analysis, modal analysis, time domain analysis, vibrations
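The frequency-domain side of such an analysis can be sketched as follows: estimating springboard modal frequencies from accelerometer data via FFT, with a synthetic two-mode signal standing in for the measurements.

```python
# Modal frequency estimation from an acceleration signal via FFT.
import numpy as np

fs = 1000.0                        # sampling rate (Hz), illustrative
t = np.arange(0, 5, 1 / fs)
accel = (np.sin(2 * np.pi * 4.2 * t)           # first bending mode
         + 0.4 * np.sin(2 * np.pi * 17.8 * t)  # higher mode
         + 0.1 * np.random.default_rng(0).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# The spectral peaks estimate the modal frequencies, to be compared
# with those from the model-based modal analysis.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(np.sort(np.round(peaks, 1)))   # approx. [4.2, 17.8]
```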
Procedia PDF Downloads 461
19521 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant
Authors: John K. Avor, Choong-Koo Chang
Abstract:
The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is performed through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters; thus, transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have malfunctioned due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of artificial neural networks and fuzzy systems (neuro-fuzzy) has not been extensively used. In this paper, we apply the neuro-fuzzy concept to determine the plant operating mode and dynamically predict the appropriate bus transfer algorithm to be selected, based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the neuro-fuzzy system. The objective of adopting the neuro-fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of an artificial neural network and a fuzzy system in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The performance of the scheme is implemented on the APR1400 nuclear power plant auxiliary system.
Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability
Procedia PDF Downloads 175