Search results for: error metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2403

303 Improving the Quality of Discussion and Documentation of Advance Care Directives in a Community-Based Resident Primary Care Clinic

Authors: Jason Ceavers, Travis Thompson, Juan Torres, Ramanakumar Anam, Alan Wong, Andrei Carvalho, Shane Quo, Shawn Alonso, Moises Cintron, Ricardo C. Carrero, German Lopez, Vamsi Garimella, German Giese

Abstract:

Introduction: Advance directives (ADs) are essential for patients to communicate their wishes when they are unable to do so. Ideally, these discussions should not occur for the first time when a patient is hospitalized with an acute life-threatening illness. A large number of patients do not have clearly documented ADs, resulting in the misutilization of resources and additional patient harm. This is a nationwide issue, and the Joint Commission tracks it as one of its national quality metrics. Presented here is a proposed protocol to increase the number of documented AD discussions in a community-based, internal medicine residency primary care clinic in South Florida. Methods: The SMART aim for this quality improvement project is to increase documentation of AD discussions in the outpatient setting by 25% within three months in Medicare patients. A survey was sent to stakeholders (clinic attendings, residents, medical assistants, front desk staff, and clinic managers) asking for the three factors they believed contributed most to the low documentation rate of AD discussions. The two most important factors were time constraints and systems issues (such as the lack of a standard method to document ADs and ADs not being uploaded to the chart), cited by 25% and 21.2% of the 32 survey respondents, respectively. Pre-intervention data from clinic patients in 2020-2021 revealed that 17.05% of patients had clear, actionable ADs documented. To address these issues, an AD pocket card was created to give to patients. One side of the card has a brief explanation of what ADs are. The other side has a column of interventions (cardiopulmonary resuscitation, mechanical ventilation, dialysis, tracheostomy, feeding tube) with boxes patients check off if they want the intervention, do not want the intervention, do not want to discuss the topic, or need more information.
These cards are to be filled out and scanned into the electronic chart to be reviewed by the resident before the appointment. The interventions that patients want more information on will be discussed by the provider. If any changes are made, the card will be re-scanned into the chart. After three months, we will review the charts of patients seen in the clinic to determine how many Medicare patients have a pocket card uploaded and how many have AD discussions documented in a progress note or annual wellness note. If there is not enough time for an AD discussion, a follow-up appointment can be scheduled for that discussion. Discussion: ADs are a crucial part of patient care, and failure to understand a patient’s wishes leads to improper utilization of resources, avoidable litigation, and patient harm. Time constraints and systems issues were identified as two major factors contributing to the lack of AD discussion in our community-based resident primary care clinic. Our project aims to increase the documentation rate for ADs through a simple pocket card intervention. The cards are self-explanatory and easy to read, and they allow patients to clearly express which interventions they desire and what they want to discuss further with their physician.

Keywords: advance directives, community-based, pocket card, primary care clinic

Procedia PDF Downloads 152
302 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of quantitative foot disorder diagnosis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodology: 2,323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and participants were then asked to walk on a force plate scanner. After data preprocessing, samples were normalized by walking time and foot size to account for differences between participants. Selected force plate variables were used as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (yes/no classification). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach was more accurate, enabling applications in foot disorder diagnosis. The detection accuracy was 71% with the deep learning algorithm and 78% with the SVM algorithm. Moreover, models based on peak plantar pressure distribution were more accurate than those based on the center of pressure dataset.
Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
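As a rough illustration of the DNN-versus-SVM comparison described above, the sketch below trains both classifiers on synthetic data with scikit-learn. The feature set, sample size, and model architectures are assumptions for demonstration only; they do not reproduce the study's data or its 71%/78% accuracy figures.

```python
# Sketch: comparing an SVM and a small neural network for binary
# classification of a foot disorder from plantar-pressure-like features.
# All data here is synthetic (stand-in for normalized peak-pressure and
# center-of-pressure variables).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the normalized force-plate feature matrix.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)                      # yes/no classifier
dnn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)          # small DNN

acc_svm = svm.score(X_te, y_te)
acc_dnn = dnn.score(X_te, y_te)
print(f"SVM accuracy: {acc_svm:.2f}, DNN accuracy: {acc_dnn:.2f}")
```

On real clinical data, the relative ranking of the two models would depend on the feature engineering and sample size, as the abstract's own 71%/78% split suggests.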

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 331
301 Particle Swarm Optimization Based Vibration Suppression of a Piezoelectric Actuator Using Adaptive Fuzzy Sliding Mode Controller

Authors: Jin-Siang Shaw, Patricia Moya Caceres, Sheng-Xiang Xu

Abstract:

This paper aims to integrate the particle swarm optimization (PSO) method with the adaptive fuzzy sliding mode controller (AFSMC) to achieve vibration attenuation in a piezoelectric actuator subject to base excitation. The piezoelectric actuator is a complicated system made of ferroelectric materials, and its performance can be affected by a nonlinear hysteresis loop, unknown system parameters, and external disturbances. In this study, an adaptive fuzzy sliding mode controller is proposed for vibration control of the system: the fuzzy sliding mode controller is designed to tackle the unknown parameters and external disturbances of the system, and the adaptive algorithm fine-tunes this controller for error convergence. The particle swarm optimization method is used to find the optimal controller parameters for the piezoelectric actuator. PSO starts with a population of random possible solutions, called particles. The particles move through the search space with dynamically adjusted speed and direction that change according to their historical behavior, allowing the values of the particles to quickly converge towards the best solutions for the proposed problem. In this paper, an initial set of controller parameters is applied to the piezoelectric actuator, which is subject to resonant base excitation with large-amplitude vibration. The resulting vibration suppression is about 50%. PSO is then applied to search for an optimal controller in the neighborhood of this initial controller. The optimal fuzzy sliding mode controller found by PSO improves vibration attenuation up to 97.8%. Finally, an adaptive version of the fuzzy sliding mode controller is adopted to further improve vibration suppression. Simulation results verify the performance of the adaptive controller, with 99.98% vibration reduction.
In other words, the vibration of the piezoelectric actuator subject to resonant base excitation can be almost completely eliminated using this PSO-based adaptive fuzzy sliding mode controller.
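The PSO search described above can be sketched in a few lines. The snippet below minimizes a simple two-dimensional test function as a stand-in for the controller-tuning cost; the swarm size, inertia weight, and acceleration constants are illustrative assumptions, not the authors' settings.

```python
# Minimal particle swarm optimization sketch. The sphere function is a
# stand-in objective for the (unavailable) controller-parameter cost.
import numpy as np

rng = np.random.default_rng(0)

def cost(p):                       # stand-in objective, optimum at the origin
    return np.sum(p**2, axis=-1)

n, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants (assumed)

pos = rng.uniform(-5, 5, (n, dim))       # random initial particles
vel = np.zeros((n, dim))
pbest = pos.copy()                       # personal bests
pbest_val = cost(pbest)
gbest = pbest[pbest_val.argmin()].copy() # global best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Velocity update: inertia + pull toward personal and global bests.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = cost(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best cost found:", float(cost(gbest)))
```

The same loop applies unchanged when `cost` evaluates a vibration-attenuation metric from a controller simulation instead of a test function.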

Keywords: adaptive fuzzy sliding mode controller, particle swarm optimization, piezoelectric actuator, vibration suppression

Procedia PDF Downloads 134
300 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match a target spectrum is a commonly used technique for estimating nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS), or the Conditional Spectrum (CS). Different sets of criteria exist among these methodologies for selecting and scaling ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand along with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on minimizing the error between the scaled median and the target spectrum, while the dispersion of the earthquake shaking is preserved along the period interval. The impact of spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimates, results are compared with those obtained by the CMS-based scaling methodology.
The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
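The scaling stage described above, minimizing the error between a scaled spectrum and the target, can be illustrated with a one-parameter least-squares fit. The spectra below are synthetic stand-ins, and the sketch omits the dispersion-preservation step of the actual procedure.

```python
# Sketch of amplitude scaling: a single scale factor per record is
# chosen in closed form to minimize the squared error between the
# scaled response spectrum and the target spectrum over the period
# interval. Spectra here are synthetic illustrations only.
import numpy as np

def ls_scale_factor(sa, target):
    """Closed-form s minimizing ||s * sa - target||^2."""
    return float(np.dot(sa, target) / np.dot(sa, sa))

periods = np.linspace(0.1, 4.0, 40)      # period interval of interest
target = 0.8 * np.exp(-periods)          # stand-in target spectrum
record = 0.4 * np.exp(-periods)          # stand-in record spectrum (target / 2)

s = ls_scale_factor(record, target)
err = float(np.linalg.norm(s * record - target))
print(f"scale factor = {s:.3f}, residual = {err:.2e}")
```

For this contrived pair the optimal factor is exactly 2 and the residual vanishes; real records would leave a nonzero residual that the selection algorithm trades off across the ensemble.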

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 576
299 Hidden Hot Spots: Identifying and Understanding the Spatial Distribution of Crime

Authors: Lauren C. Porter, Andrew Curtis, Eric Jefferis, Susanne Mitchell

Abstract:

A wealth of research has been generated examining the variation in crime across neighborhoods. However, there is also a striking degree of crime concentration within neighborhoods. A number of studies show that a small percentage of street segments, intersections, or addresses account for a large portion of crime. Not surprisingly, a focus on these crime hot spots can be an effective strategy for reducing community-level crime and related ills, such as health problems. However, research is also limited in an important respect. Studies tend to use official data to identify hot spots, such as 911 calls or calls for service. While call data may be more representative of the actual level and distribution of crime than some other official measures (e.g., arrest data), call data still suffer from the 'dark figure of crime.' That is, there is most certainly a degree of error between crimes that occur and crimes that are reported to the police. In this study, we present an alternative method of identifying crime hot spots that does not rely on official data. In doing so, we highlight the potential utility of neighborhood insiders for identifying and understanding crime dynamics within geographic spaces. Specifically, we use spatial video and geo-narratives to record the crime insights of 36 police officers, ex-offenders, and residents of a high-crime neighborhood in northeast Ohio. Spatial mentions of crime are mapped to identify participant-identified hot spots, and these are juxtaposed with calls for service (CFS) data. While there are bound to be differences between these two sources of data, we find that one location in particular, a corner store, emerges as a hot spot for all three groups of participants. Yet it does not emerge when we examine CFS data.
A closer examination of the space around this corner store and a qualitative analysis of narrative data reveal important clues as to why this store may indeed be a hot spot, but not generate disproportionate calls to the police. In short, our results suggest that researchers who rely solely on official data to study crime hot spots may risk missing some of the most dangerous places.

Keywords: crime, narrative, video, neighborhood

Procedia PDF Downloads 226
298 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory

Authors: Kiana Zeighami, Morteza Ozlati Moghadam

Abstract:

Designing a high-accuracy and high-precision motion controller is one of the important issues in today’s industry. There are effective solutions available in the industry, but the real-time performance, smoothness, and accuracy of the movement can be further improved. This paper discusses a complete solution to carry out the movement of three stepper motors in three dimensions. The objective is to provide a method to design a fully integrated System-on-Chip (SoC)-based motion controller, reducing the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for three axes. A profile generator module is designed to realize the interpolation algorithm by translating the position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is one of the fundamentals of robot movement and is highly applicable in motion control industries. Along with the full profile trajectory, a triangular drive is implemented to eliminate error at small distances. To integrate the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, a NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute positioning, relative positioning, reset, and velocity modes, to fulfill user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, a precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.
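A software sketch of the kind of dominant-axis linear interpolation (DDA) a profile generator performs is shown below; the real design emits these pulses from FPGA logic, and the function here is purely illustrative.

```python
# Sketch of 3-axis linear interpolation: a dominant-axis DDA generates
# per-tick step pulses so each axis receives exactly its commanded
# number of steps, spread evenly along the line. Illustration only;
# the paper implements this in FPGA hardware.
def linear_interpolate_3axis(dx, dy, dz):
    """Yield (step_x, step_y, step_z) pulse tuples for a straight-line
    move of (dx, dy, dz) steps (non-negative step counts assumed)."""
    steps = max(dx, dy, dz)          # dominant axis sets the tick count
    acc = [0, 0, 0]                  # per-axis error accumulators
    for _ in range(steps):
        out = []
        for i, d in enumerate((dx, dy, dz)):
            acc[i] += d
            if acc[i] >= steps:      # accumulator overflow -> emit a pulse
                acc[i] -= steps
                out.append(1)
            else:
                out.append(0)
        yield tuple(out)

pulses = list(linear_interpolate_3axis(10, 4, 7))
totals = [sum(p[i] for p in pulses) for i in range(3)]
print(totals)   # each axis ends up with its commanded step count
```

In hardware, each accumulator maps naturally to a register-and-comparator per axis, which is why this family of algorithms is popular in FPGA motion controllers.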

Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping

Procedia PDF Downloads 196
297 Novel Hole-Bar Standard Design and Inter-Comparison for Geometric Errors Identification on Machine-Tool

Authors: F. Viprey, H. Nouira, S. Lavernhe, C. Tournier

Abstract:

Manufacturing of freeform parts may be achieved on 5-axis machine tools, currently considered a common means of production. In particular, the geometrical quality of freeform parts depends on the accuracy of the multi-axis structural loop, which is composed of several component assemblies maintaining the relative positioning between the tool and the workpiece. Therefore, to reach high geometric quality of the freeform parts, the geometric errors of the 5-axis machine should be evaluated and compensated, which requires mastering the deviations between the tool and the workpiece (volumetric accuracy). In this study, a novel hole-bar design was developed and used for the characterization of the geometric errors of an RRTTT 5-axis machine tool. The hole-bar standard is made of Invar, selected because it is less sensitive to thermal drift. The proposed design allows one to extract three intrinsic parameters: one linear positioning error and two straightness errors. These parameters can be obtained by measuring the cylindricity of 12 holes (bores) and 11 cylinders located on a perpendicular plane. By mathematical analysis, twelve 3D point coordinates can be identified, corresponding to the intersection of each hole axis with the least-squares plane passing through two perpendicular neighbouring cylinder axes. The hole-bar was calibrated using a precision CMM at LNE, traceable to the SI metre definition. The reversal technique was applied in order to separate the form errors of the hole-bar from the motion errors of the mechanical guiding systems. An inter-comparison was additionally conducted between four NMIs (National Metrology Institutes) within the EMRP IND62: JRP-TIM project. Afterwards, the hole-bar was integrated into the RRTTT 5-axis machine tool to identify its volumetric errors. Measurements were carried out in real time, combining raw data acquired by the Renishaw RMP600 touch probe with the linear and rotary encoders.
The geometric errors of the 5-axis machine were also evaluated by an accurate laser tracer interferometer system, and the results were compared to those obtained with the hole-bar.
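The least-squares plane fit underlying the intersection of each hole axis with the plane through neighbouring cylinder axes can be sketched as follows; the point cloud is synthetic, whereas the real input would come from calibrated CMM or touch-probe measurements.

```python
# Sketch of a least-squares plane fit via SVD: the plane normal is the
# right singular vector of the centred points with the smallest
# singular value. Points below are a synthetic stand-in for probed
# cylinder-axis samples.
import numpy as np

def fit_plane(points):
    """Return (centroid, unit normal) of the least-squares plane."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - centroid)
    return centroid, vh[-1]          # last row: smallest singular direction

# Noisy synthetic points near the plane z = 0.5 (assumed example).
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (50, 2))
z = 0.5 + 1e-4 * rng.standard_normal(50)   # small out-of-plane noise
pts = np.column_stack([xy, z])

c, n = fit_plane(pts)
print("centroid:", c, "normal:", n)
```

Intersecting a hole axis (a point plus direction) with this plane is then a one-line parametric computation, which yields the twelve 3D points mentioned in the abstract.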

Keywords: volumetric errors, CMM, 3D hole-bar, inter-comparison

Procedia PDF Downloads 370
296 Risk Management in Islamic Micro Finance Credit System for Poverty Alleviation from Qualitative Perspective

Authors: Liyu Adhi Kasari Sulung

Abstract:

Poverty has been a major problem in Indonesia. Islamic microfinance (IMF) institutions named Baitul Maal Wat Tamwil (BMT) play a prominent role in eradicating it. Indonesia, the largest Muslim-majority country, has many successfully applied products, such as the widely adopted group-based lending approach, flexible financing for farmers, and gold pawning. The problems related to these models concern operational risk management and the internal control system (ICS). A proper ICS helps an organization prevent bad financing by detecting errors and irregularities in its operations. This study aims to establish a proper risk management scheme for the credit system in BMT and to rank the internal control system at every stage. Risk management variables were first obtained through In-Depth Interviews (IDI) and Focus Group Discussions (FGD) with Shariah supervisory boards, boards of directors, and operational managers. A survey was then conducted covering nationwide data: West Java, South Sulawesi, and West Nusa Tenggara. Moreover, content analysis was employed to build the relationships among these variables. The findings show that risk management in Indonesia involves ex-ante, credit-process, and ex-post strategies to deal with risk in the credit system. Ex-ante control consists of Shariah compliance, surveys, group leader references, and Islamic forming orientation. The credit process involves controls on saving, collateral, joint liability, loan repayment, and credit installments. Finally, ex-post control includes Shariah evaluation, credit evaluation, grace periods, and low installment provisions. In addition, the internal control order ranks the three stages by priority: the credit process first, ex-post control second, and ex-ante control last.

Keywords: internal control system, islamic micro finance, poverty, risk management

Procedia PDF Downloads 392
295 Fabrication of Electrospun Microbial Siderophore-Based Nanofibers: A Wound Dressing Material to Inhibit the Wound Biofilm Formation

Authors: Sita Lakshmi Thyagarajan

Abstract:

Nanofibers will leave no field untouched by their scientific innovations; the medical field is no exception. Electrospinning has proven to be an excellent method for the synthesis of nanofibers, which have attracted interest for many biomedical applications. The formation of biofilms in wounds often leads to chronic infections that are difficult to treat with antibiotics. In order to minimize biofilms and enhance wound healing, this study focused on the preparation of potential nanofibers. Siderophore-incorporated nanofibers were electrospun using biocompatible polymers onto a collagen scaffold and fabricated into a biomaterial suitable for the inhibition of biofilm formation. The purified microbial siderophore was blended with poly-L-lactide (PLLA) and poly(ethylene oxide) (PEO) in a suitable solvent. Fabrication of the siderophore-blended nanofibers onto the collagen surface was done using standard protocols. The fabricated scaffold was subjected to physico-chemical characterization. The results indicated that the nanofibrous scaffold possesses the characteristics expected of a potential scaffold, with nanoscale morphology and microscale arrangement. The influence of PLLA and PEO solution concentration, applied voltage, tip-to-collector distance, feeding rate, and collector speed was studied. The optimal parameters, such as the ratio of PLLA to PEO concentration, applied voltage, tip-to-collector distance, feeding rate, and collector speed, were finalized based on trial-and-error experiments. The fibers were found to have a uniform diameter with an aligned morphology. The overall study suggests that the prepared siderophore-entrapped nanofibers could be used as a potent wound dressing material for the inhibition of biofilm formation.

Keywords: biofilms, electrospinning, nano-fibers, siderophore, tissue engineering scaffold

Procedia PDF Downloads 112
294 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris

Abstract:

Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits. The maturity or ripeness of the oil palm fruits influences the quality of the palm oil. The conventional procedure involves physically grading Fresh Fruit Bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly and time consuming, and the results may be subject to human error. Hence, many researchers have tried to develop methods for ascertaining the maturity of oil palm fruits, and thereby, indirectly, the oil content of individual palm fruits, without the need for exhaustive oil extraction and analysis. This research investigates the potential of infrared (thermal) images as a predictor for classifying oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches of the most common cultivar, Nigrescens, were collected according to three maturity categories: under ripe, ripe, and over ripe. Each sample was scanned with FLIR E60 and FLIR T440 thermal imaging cameras. The average temperature of each bunch was calculated using image processing in the FLIR Tools and FLIR ThermaCAM Researcher Pro 2.10 software environments. The results show that temperature decreased from immature to over-mature oil palm FFBs. An overall analysis of variance (ANOVA) showed that this predictor gave a significant difference between the under ripe, ripe, and over ripe maturity categories. This shows that temperature can be a good indicator for classifying oil palm FFB. Classification analysis was performed using the temperature of the FFB as a predictor with Linear Discriminant Analysis (LDA), Mahalanobis Discriminant Analysis (MDA), Artificial Neural Network (ANN), and K-Nearest Neighbor (KNN) methods. The highest overall classification accuracy, 88.2%, was obtained with the Artificial Neural Network.
This research shows that thermal imaging combined with a neural network method can be used to classify oil palm maturity.
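As a minimal illustration of classifying maturity from a single mean-temperature feature, the sketch below runs LDA and KNN on fabricated temperature distributions; the class means and spreads are assumptions and do not reflect the measured FFB data or the reported 88.2% ANN accuracy.

```python
# Sketch: three maturity classes separated only by mean bunch
# temperature, classified with LDA and KNN. Temperatures are invented
# stand-ins (decreasing with maturity, as the abstract reports).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Assumed mean bunch temperatures (deg C) per maturity class.
temps = np.concatenate([rng.normal(28.0, 0.5, 90),   # under ripe
                        rng.normal(27.0, 0.5, 90),   # ripe
                        rng.normal(26.0, 0.5, 90)])  # over ripe
X = temps.reshape(-1, 1)
y = np.repeat([0, 1, 2], 90)

acc_lda = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
acc_knn = cross_val_score(KNeighborsClassifier(5), X, y, cv=5).mean()
print(f"LDA accuracy: {acc_lda:.2f}, KNN accuracy: {acc_knn:.2f}")
```

With a single feature the decision boundaries reduce to temperature thresholds, which is why even simple classifiers perform reasonably when the class means are well separated.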

Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging

Procedia PDF Downloads 346
293 Exploring the Prebiotic Potential of Glucosamine

Authors: Shilpi Malik, Ramneek Kaur, Archita Gupta, Deepshikha Yadav, Ashwani Mathur, Manisha Singh

Abstract:

Glucosamine (GS) is the most abundant naturally occurring amino monosaccharide and is normally produced in the human body via cellular glucose metabolism. It is regarded as the building block of the cartilage matrix and is also an essential component of the cartilage matrix repair mechanism. Besides that, it can also be explored for its prebiotic potential, as many bacterial species are known to utilize the amino sugar, acquiring it to form peptidoglycans and lipopolysaccharides in the bacterial cell wall. Glucosamine can therefore be considered for fermentation by bacterial species present in the gut. The current study is focused on exploring the potential of glucosamine as a prebiotic. Studies were done to optimize a considerable concentration of GS that reaches the GI tract and is fermented by the complex gut microbiota, and food-grade GS was added to various simulated fluids of the gastrointestinal tract (GIT), such as simulated saliva, gastric fluid (fasted and fed state), and colonic fluid, to detect its degradation. Since it showed an increase in microbial growth (CFU) with time, GS was further encapsulated to increase its residence time in the gut, which exhibited improved resistance to the simulated gut conditions. Moreover, the prepared microspheres were optimized and characterized for their encapsulation efficiency and toxicity. To further substantiate the prebiotic activity of glucosamine, studies were also performed to determine the effect of glucosamine on the known probiotic bacterial species Lactobacillus delbrueckii (MTCC 911) and Bifidobacterium bifidum (MTCC 5398). For the culture conditions, glucosamine will be added to MRS media in anaerobic tubes at 0.20%, 0.40%, 0.60%, 0.80%, and 1.0%, respectively. MRS media without GS was included in this experiment as the control. All samples were autoclaved at 118 °C for 15 min.
Active culture was added at 5% (v/v) to each anaerobic tube after cooling to room temperature and incubated at 37 °C; biomass, pH, and viable count were then determined after 18 h of incubation. The experiment was performed in triplicate, and the results are presented as mean ± SE (standard error). The experimental results are conclusive and suggest that glucosamine holds prebiotic properties.

Keywords: gastro intestinal tract, microspheres, peptidoglycans, simulated fluid

Procedia PDF Downloads 317
292 CO2e Sequestration via High Yield Crops and Methane Capture for ZEV Sustainable Aviation Fuel

Authors: Bill Wason

Abstract:

143 crude palm oil coop mills on Sumatra Island are participating in a program to transfer land from defaulted estates to small farmers while improving the sustainability of palm production to allow for biofuel and food production. GCarbon will be working with farmers to transfer technology, fertilizer, and trees to double the yield from the current baseline of 3.5 tons to at least 7 tons of oil per ha (25 tons of fruit bunches). This will be measured via yield comparisons between participant and non-participant farms. We will also capture methane from Palm Oil Mill Effluent (POME) through belt press filtering. Residues will be weighed, and a formula will be used to estimate methane emission reductions based on methodologies developed by other researchers. GCarbon will also cover mill ponds with a non-permeable membrane and collect methane for energy or steam production. A system for accelerating methane production involving ozone and electro-flocculation will be tested to intensify methane generation and reduce the time for wastewater treatment. A meta-analysis of research on sweet potatoes and sorghum as rotation crops will look at work in Rio Grande do Sul, Brazil, where 5 ha of test plots of industrial sweet potato have achieved yields of 60 tons and 40 tons per ha from two harvests in one year (100 MT/ha/year). Field trials will be duplicated in Bom Jesus das Selvas, Maranhão, to test varieties of sweet potatoes, measure yields, and evaluate disease risks in the very different soil and climate of NE Brazil. Hog methane will also be captured. GCarbon Brazil, Coop Sisal, and an Australian research partner will plant several varieties of agave and use agronomic procedures to get yields of 880 MT per ha over 5 years. They will also plant new varieties expected to yield 3,500 MT of biomass after 5 years (176-700 MT per ha per year). The goal is to show that agave can adapt to Brazil’s climate without disease problems.
The study will include a field visit to growing sites in Australia where agave is being grown commercially for biofuel production. Researchers will measure the biomass per hectare at various stages in the growing cycle, the sugar content at harvest, and other metrics to confirm that the yield of sugar per ha is up to 10 times greater than that of sugar cane. The study will look at sequestration rates by measuring soil carbon and root accumulation in various plots in Australia to confirm the carbon sequestered over 5 years of production. The agave developer estimates that 60-80 MT of sequestration per ha per year occurs from agave. The three study efforts in three different countries will define a feedstock pathway for jet fuel that involves very high-yield crops that can produce 2 to 10 times more biomass than current assumptions. This cost-effective and less land-intensive strategy is intended to meet global jet fuel demand and produce large quantities of food, supporting net-zero aviation and feeding 9-10 billion people by 2050.
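A back-of-envelope check of the headline figures can be written down directly; the per-hectare numbers come from the abstract, while the 1,000 ha example area is an assumption for illustration.

```python
# Arithmetic sketch of the abstract's per-hectare figures, scaled to an
# assumed 1,000 ha example area. Only the per-ha numbers are from the
# text; the area and the mid-range average are illustrative choices.
baseline_oil_t_ha = 3.5             # current palm oil yield (t/ha, from text)
target_oil_t_ha = 7.0               # target yield after intervention (t/ha)
agave_seq_t_ha_yr = (60 + 80) / 2   # mid-range agave sequestration (t CO2e/ha/yr)

area_ha = 1_000                     # assumed example area
extra_oil = (target_oil_t_ha - baseline_oil_t_ha) * area_ha
seq_5yr = agave_seq_t_ha_yr * area_ha * 5

print(f"extra palm oil: {extra_oil:,.0f} t/yr")
print(f"agave sequestration over 5 yr: {seq_5yr:,.0f} t CO2e")
```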

Keywords: zero emission SAF, methane capture, food-fuel integrated refining, new crops for SAF

Procedia PDF Downloads 87
291 Evaluation of Key Performance Indicators as Determinants of Dividend Paid on Ordinary Shares in Nigeria Banking Sector

Authors: Oliver Ikechukwu Inyiama, Boniface Uche Ugwuanyi

Abstract:

The aim of this research is to evaluate the key financial performance indicators that help both the managers and the shareholders of Nigerian banks to determine the appropriate dividend payout to ordinary shareholders in an accounting year. Profitability, total assets, and earnings of commercial banks were selected as key performance indicators in the Nigerian banking sector. They represent the independent variables of the study, while dividend per share is the proxy for the dividend paid on ordinary shares, which represents the dependent variable. The effects of profitability, total assets, and earnings on dividend per share were evaluated through the ordinary least squares method of multiple regression analysis. Tests for normality of the frequency distribution were conducted through descriptive statistics such as the Jarque-Bera statistic, skewness, and kurtosis. The rate of dividend payout was subsequently applied as an alternate dependent variable to test the robustness of the earlier results. The 64% adjusted R-squared of the pooled data indicates that profitability, total assets, and earnings explain that share of the variation in dividend per share during the period under research, while the remaining 36% of the variation could be explained by changes in other variables not captured by this study, as well as by the error term. The study concentrated on four leading Nigerian commercial banks: First Bank of Nigeria Plc, GTBank Plc, United Bank for Africa Plc, and Zenith International Bank Plc. Dividend per share was found to be positively affected by the total assets and earnings of the commercial banks. However, profitability, which was proxied by profit after tax, had a negative effect on dividend per share. The implication of the findings is that commercial banks in Nigeria pay more dividends when their fortunes are dwindling, in order to retain the confidence of shareholders, provided their gross earnings and size are on the increase.
Therefore, the management and board of directors of Nigeria commercial banks should apply decent marketing strategies to enhance earnings through investment in profitable ventures for an improved dividend payout rate.
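The pooled OLS design with adjusted R-squared described above can be sketched in a few lines. The observations below are invented for illustration (they are not the study's data), and the tiny Gaussian-elimination solver merely stands in for a statistics package:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """OLS coefficients (intercept first) and adjusted R-squared."""
    X = [[1.0] + row for row in X]            # prepend intercept column
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    yhat = [sum(b * x for b, x in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    n, p = len(y), k - 1
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return beta, adj_r2

# Hypothetical rows: [profit_after_tax, total_assets, earnings] -> dividend per share
X = [[5.1, 120, 30], [4.8, 150, 34], [4.2, 180, 40], [4.6, 200, 45],
     [3.9, 230, 52], [4.4, 260, 55], [3.7, 300, 63], [4.0, 340, 70]]
y = [0.9, 1.1, 1.3, 1.4, 1.6, 1.7, 2.0, 2.1]
beta, adj_r2 = ols(X, y)
```

The adjusted R-squared penalizes the three predictors for sample size, which is how the 64% figure in the abstract should be read.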

Keywords: assets, banks, indicators, performance, profitability, shares

Procedia PDF Downloads 152
290 Using Statistical Significance and Prediction to Test Long/Short Term Public Services and Patients' Cohorts: A Case Study in Scotland

Authors: Raptis Sotirios

Abstract:

Health and social care (HSc) services planning and scheduling are facing unprecedented challenges due to pandemic pressure, and also suffer from unplanned spending negatively impacted by the global financial crisis. Data-driven approaches can help improve policies and plan and design service provision schedules, using algorithms to assist healthcare managers facing unexpected demand with fewer resources. The paper discusses service packing using statistical significance tests and machine learning (ML) to evaluate demand similarity and coupling. This is achieved by predicting the range of the demand (its class) using ML methods such as CART, random forests (RF), and logistic regression (LGR). The chi-squared test and Student's t-test are applied to data spanning 39 years of HSc services delivered in Scotland. The demands are probabilistically associated through statistical hypotheses that take as the null hypothesis that the target service's demand is statistically dependent on other demands; this linkage can be confirmed or rejected by the data. Complementarily, ML methods are used to linearly predict the target demands from the statistically found associations, extending the linear dependence of the target's demand to independent demands and thus forming groups of services. Statistical tests confirm the ML couplings, making the predictions statistically meaningful and showing that a target service can be matched reliably to other services; ML shows these relationships can also be linear. Zero padding was used for missing years' records and better illustrated such relationships both for limited years and over the entire span, offering long-term data visualizations, while limited-year groups explained how well patient numbers can be related over short periods or can change over time, as opposed to behaviors across more years.
The prediction performance of the associations is measured using receiver operating characteristic (ROC) AUC and ACC metrics as well as the chi-squared and Student's t statistical tests. Co-plots and comparison tables for RF, CART, and LGR, as well as p-values and information exchange (IE), are provided, showing the specific behavior of the ML methods and of the statistical tests, and their behavior under different learning ratios. The impact of k-NN, cross-correlation, and C-means first groupings is also studied over limited years and the entire span. It was found that CART was generally behind RF and LGR, but in some interesting cases LGR reached an AUC of 0, falling below CART, while the ACC was as high as 0.912, showing that ML methods can be confused by padding, data irregularities, or outliers. On average, 3 linear predictors were sufficient; LGR was found to compete well with RF, and CART followed with the same performance at higher learning ratios. Services were packed only when the significance level (p-value) of their association coefficient was more than 0.05. Social-factor relationships were observed between home care services and treatment of old people, birth weights, alcoholism, drug abuse, and emergency admissions. The work found that different HSc services can be well packed into plans of limited years, across various service sectors and learning configurations, as confirmed using statistical hypotheses.
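The two building blocks named above — a chi-squared test of independence between binned demand series, and ROC AUC for a predicted demand class — can be sketched on made-up counts (the table and scores below are illustrative, not the Scottish data):

```python
def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Yearly demand for two services, each binned into low/high classes
table = [[30, 10],   # service A low:  service B low / high
         [8, 22]]    # service A high
stat = chi_squared(table)   # compare against the df=1 critical value 3.84 (p=0.05)

labels = [1, 1, 1, 0, 0, 1, 0, 0]          # high-demand class membership
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.3, 0.2]
auc = roc_auc(labels, scores)
```

A statistic above the critical value rejects independence of the two demand series, which is the coupling signal the paper then cross-checks with the ML predictions.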

Keywords: class, cohorts, data frames, grouping, prediction, probability, services

Procedia PDF Downloads 215
289 Performance Evaluation of Production Schedules Based on Process Mining

Authors: Kwan Hee Han

Abstract:

The external environment of the enterprise is changing rapidly, driven mainly by global competition, cost-reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining the goal of operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies this task is difficult, because resource capacity must be utilized efficiently under careful consideration of many interacting constraints. At present, many computerized software solutions are used in enterprises to generate realistic production schedules and overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining has only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable useful analyses of a wide variety of processes, such as process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of the production scheduling software system using process mining techniques, since every software system generates event logs for further use such as security investigation, auditing, and debugging. An application of the process mining approach is proposed for validating the goodness of production schedules generated by scheduling software systems.
By using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the work load balance of each machine over time are measured, and finally the goodness of the production schedule is evaluated. By using the proposed process mining approach, the quality of production schedules in manufacturing enterprises can be improved.
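One of the evaluation criteria named above, workstation utilization, can be computed directly from an event log. The log format and values below are invented for illustration; real process-mining tools read standardized logs (e.g. XES), but the calculation is the same:

```python
# Each event records a workstation and the start/end of one scheduled operation
events = [
    {"ws": "W1", "start": 0, "end": 5},
    {"ws": "W1", "start": 6, "end": 9},
    {"ws": "W2", "start": 0, "end": 2},
    {"ws": "W2", "start": 4, "end": 6},
]
horizon = 10  # scheduling horizon, in the log's time units

busy = {}
for e in events:
    busy[e["ws"]] = busy.get(e["ws"], 0) + (e["end"] - e["start"])
utilization = {ws: t / horizon for ws, t in busy.items()}
# A workstation whose utilization approaches 1.0 is a bottleneck candidate.
```

Repeating this per machine over time windows gives the workload-balance view the abstract describes.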

Keywords: data mining, event log, process mining, production scheduling

Procedia PDF Downloads 268
286 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services with a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the backbone of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has received few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing theory parameters to describe these transitions. It relates the MAPE-K timings, the sampling frequency, the cooldown period, the number of requests an instance can handle per unit of time, the number of incoming requests at a given instant, and a function describing the acceleration of the service's ability to handle more requests. This model is later used to horizontally auto-scale time-sensitive services composed of microservices, re-evaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid for different system sizes. The study includes performance recommendations for improving results according to the shape of the incoming load and the business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A typical request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds.
Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests that cannot finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned to them. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes computing all its requests and is no longer required, it is permanently deallocated. A few load patterns suffice to represent the worst cases for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first is a burst-load scenario: all methodologies will discard requests if the burst is steep enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load at lower business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
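A reactive scaler in the MAPE-K spirit can be sketched far more simply than the paper's model: size the instance count from the sampled arrival rate and per-instance capacity, scale out immediately, and scale in only after a cooldown. All names and numbers below are illustrative assumptions, not the paper's parameters:

```python
import math

def plan(arrival_rate, capacity, current, last_scale_down, now, cooldown):
    """Return (new instance count, time of last scale-in) for this sampling tick."""
    needed = max(1, math.ceil(arrival_rate / capacity))
    if needed > current:
        return needed, last_scale_down          # scale out immediately
    if needed < current and now - last_scale_down >= cooldown:
        return current - 1, now                 # scale in one step, respecting cooldown
    return current, last_scale_down

# One tick: 95 req/s sampled, 20 req/s per instance, currently 2 instances
n, t_down = plan(arrival_rate=95, capacity=20, current=2,
                 last_scale_down=0, now=10, cooldown=30)
```

The asymmetry (instant scale-out, damped scale-in) is what protects response time during the burst and drop-then-burst scenarios described above.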

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 83
287 Computer Simulation Approach in the 3D Printing Operations of Surimi Paste

Authors: Timilehin Martins Oyinloye, Won Byong Yoon

Abstract:

Simulation technology is being adopted in many industries, with research focusing on new ways in which the technology becomes embedded within production, services, and society in general. 3D printing (3DP) technology is developing fast in the food industry; however, the limited processability of high-performance materials restricts the robustness of the process in some cases. Significantly, the printability of materials is the foundation of extrusion-based 3DP, with residual stress a major challenge in the printing of complex geometry. In many situations, trial-and-error is used to determine the optimum printing conditions, wasting time and resources. In this report, three moisture levels of surimi paste were investigated to find the optimum 3DP material and printing conditions by probing the paste's rheology, its flow characteristics in the nozzle, and the post-deposition process using a finite element method (FEM) model. Rheological tests revealed that surimi paste with 82% moisture is suitable for 3DP. According to the FEM model, decreasing the nozzle diameter from 1.2 mm to 0.6 mm increased the die swell from 9.8% to 14.1%. The die swell ratio increased due to an increase in the pressure gradient (1.15×10⁷ Pa to 7.80×10⁷ Pa) at the nozzle exit. The nozzle diameter influenced the fluid properties, i.e., the shear rate, velocity, and pressure in the flow field, as well as the residual stress and the deformation of the printed sample, according to the FEM simulation. The post-printing stability of the model was investigated using the additive layer manufacturing (ALM) model. The ALM simulation revealed that the residual stress and total deformation of the sample depended on the nozzle diameter: a small nozzle diameter (0.6 mm) resulted in greater total deformation (0.023), particularly at the top of the model, which eventually caused the sample to collapse.
As the nozzle diameter increased, the accuracy of the model improved up to the optimum nozzle size (1.0 mm). Validation with 3D-printed surimi products confirmed that the nozzle diameter is a key parameter affecting the geometric accuracy of 3DP of surimi paste.
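The strong sensitivity to nozzle diameter reported above can be illustrated with a back-of-envelope calculation outside the FEM model: the wall shear rate of a power-law (shear-thinning) fluid in a cylindrical nozzle scales with 1/R³ at fixed flow rate. The flow rate and power-law index below are assumed values, not the study's measurements:

```python
import math

def wall_shear_rate(Q, diameter, n):
    """Rabinowitsch-corrected wall shear rate for a power-law fluid in a tube."""
    R = diameter / 2
    newtonian = 4 * Q / (math.pi * R ** 3)       # Newtonian wall shear rate
    return (3 * n + 1) / (4 * n) * newtonian     # power-law correction

Q = 1e-8    # volumetric flow rate in m^3/s (assumed)
n = 0.35    # power-law index typical of a shear-thinning paste (assumed)
g_12 = wall_shear_rate(Q, 1.2e-3, n)   # 1.2 mm nozzle
g_06 = wall_shear_rate(Q, 0.6e-3, n)   # 0.6 mm nozzle
ratio = g_06 / g_12                    # halving the diameter multiplies shear by 8
```

This eightfold jump in wall shear is consistent with the sharply higher pressure gradient and die swell the FEM predicts for the 0.6 mm nozzle.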

Keywords: 3D printing, deformation analysis, die swell, numerical simulation, surimi paste

Procedia PDF Downloads 55
286 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations

Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.

Abstract:

Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the intervals between test-day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the "minpack.lm" package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. Goodness of fit was evaluated using the mean square of prediction error (MSPE), the root MSPE (RMSPE), Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). The WM gave an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves than for atypical curves (0.90 vs. 0.74), with the highest values for the 28D interval (r=0.95). Likewise, in atypical curves we observed an overestimated peak yield (0.92 vs. 6.6 L) and an underestimated time of peak yield (21.5 vs. 1.46). The best RMSPE values were observed for the 28D interval in both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were shown by the 7D interval for both typical and atypical curves. These results represent a first approach to defining an adequate recording regime for dairy sheep in Latin America and showed a better fit of the Wood model with the 7D interval.
However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.
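The Wood model is y(t) = a·t^b·e^(−ct), and taking logs, ln y = ln a + b·ln t − c·t, reduces the fit to ordinary least squares (the R package cited above instead fits the nonlinear form directly). The test-day records below are synthetic, generated from known parameters, not the study's data:

```python
import math

def fit_wood(days, yields):
    """Fit y = a * t^b * exp(-c t) by log-linearised least squares."""
    rows = [[1.0, math.log(t), -t] for t in days]   # design matrix [1, ln t, -t]
    ys = [math.log(y) for y in yields]
    k = 3
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(k)]
    M = [A[i] + [b[i]] for i in range(k)]           # 3x3 normal equations
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, k):
            f = M[r][c] / M[c][c]
            for j in range(c, k + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * k
    for r in range(k - 1, -1, -1):
        x[r] = (M[r][k] - sum(M[r][j] * x[j] for j in range(r + 1, k))) / M[r][r]
    return math.exp(x[0]), x[1], x[2]               # a, b, c

# Synthetic weekly (7D) records generated from a = 1.2, b = 0.25, c = 0.03
days = [7, 14, 21, 28, 35, 42, 49, 56]
obs = [1.2 * t ** 0.25 * math.exp(-0.03 * t) for t in days]
a, b_, c_ = fit_wood(days, obs)
# Time of peak yield for the Wood model is t = b/c
```

Coarser intervals (14D, 21D, 28D) simply thin the `days` grid, which is exactly the sensitivity the study quantifies.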

Keywords: incomplete gamma, ewes, shape curves, modeling

Procedia PDF Downloads 57
285 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, namely support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectral data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and coefficients of determination (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
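The repeated random-split evaluation protocol described above can be sketched with a single linear model standing in for the six regressors. The "spectra" below are a synthetic one-feature stand-in on a 20 mg/dl grid, not the paper's data:

```python
import random

random.seed(0)
# Synthetic absorbance feature vs glucose concentration in 20 mg/dl increments
conc = [20 * i for i in range(1, 26)]
absorb = [0.004 * c + random.gauss(0, 0.01) for c in conc]
data = list(zip(absorb, conc))

def fit_predict(train, test):
    """Simple least-squares line standing in for SVMR/ETR/etc."""
    xs, ys = zip(*train)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in train) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return [(slope * x + intercept, y) for x, y in test]

r2s = []
for _ in range(10):                       # ten random splits, as in the study
    random.shuffle(data)
    train, test = data[:20], data[20:]
    preds = fit_predict(train, test)
    my = sum(y for _, y in preds) / len(preds)
    ss_res = sum((y - p) ** 2 for p, y in preds)
    ss_tot = sum((y - my) ** 2 for _, y in preds)
    r2s.append(1 - ss_res / ss_tot)
mean_r2 = sum(r2s) / len(r2s)             # averaged R^2 over the ten splits
```

Averaging the test-set R² over the ten splits is what gives the protocol its resistance to a lucky or unlucky single split.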

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 75
284 Voluntary Disclosure of Sustainability Information in Malaysian Federal-Level Statutory Bodies

Authors: Siti Zabedah Saidin, Aidi Ahmi, Azharudin Ali, Wan Norhayati Wan Ahmad

Abstract:

In today's increasingly complex and interconnected world, the concept of sustainability has transcended mere corporate social responsibility, evolving into a fundamental driver of organizational behaviour and disclosure. This content analysis study delves into the Malaysian federal-level statutory bodies' annual reports for the year 2021, aiming to elucidate the extent of sustainability disclosures within the non-financial sections of these reports. The escalating global emphasis on sustainability has prompted organizations to embrace transparency as a means to demonstrate their commitment to environmental, social, and governance (ESG) considerations. Voluntary sustainability disclosure has emerged as a crucial channel through which organizations communicate their efforts, initiatives, and impacts in these areas, thereby fostering trust and accountability with stakeholders. The study aims to identify and examine the types of sustainability information disclosed voluntarily by the federal-level statutory bodies, concentrating on the non-financial sections of the annual reports. To achieve this, the study adopts a simplified disclosure index, a pragmatic tool that quantifies the extent of sustainability reporting in a standardized manner. Using convenience sampling, the study selects a sample of annual reports from the federal-level statutory bodies in Malaysia, as provided on their respective websites. The content analysis is centred on the non-financial sections of these reports, allowing for an in-depth exploration of sustainability disclosures. The findings of the study present the extent to which Malaysian federal-level statutory bodies embrace sustainability reporting. Through thorough content analysis, the study uncovered diverse dimensions of sustainability information, encompassing environmental impact assessments, social engagement endeavours, and governance frameworks.
This reveals a deliberate effort by these bodies to encapsulate their holistic organizational contributions and challenges, transcending traditional financial metrics. This research contributes to the existing literature by providing insights into the evolving landscape of sustainability disclosure practices among Malaysian federal-level statutory bodies. The findings underline the proactive nature of these bodies in voluntarily sharing sustainability-related information, reflecting their recognition of the interconnectedness between organizational success and societal well-being. Furthermore, the study underscores the potential influence of regulatory guidelines and societal expectations in shaping the extent and nature of voluntary sustainability disclosures. Organizations are not merely responding to regulatory mandates but are actively aligning with global sustainability goals and stakeholder expectations. As organizations continue to navigate the intricate web of stakeholder expectations and sustainability imperatives, this study enriches the discourse surrounding transparency and sustainability reporting. The analysis emphasizes the important role of non-financial disclosures in portraying a holistic organizational narrative. In an era where stakeholders demand accountability, and the interconnectedness of global challenges necessitates collaborative action, the voluntary disclosure of sustainability information stands as a testament to the commitment of Malaysian federal-level statutory bodies in shaping a more sustainable future.

Keywords: voluntary disclosure, sustainability information, annual report, federal-level statutory body

Procedia PDF Downloads 45
283 A Study on Thermal and Flow Characteristics by Solar Radiation for Single-Span Greenhouse by Computational Fluid Dynamics Simulation

Authors: Jonghyuk Yoon, Hyoungwoon Song

Abstract:

Recently, there has been growing interest in smart farming, the application of modern information and communication technologies (ICT) to agriculture, since it provides a methodology for optimizing production efficiency by automatically managing the growing conditions of crops. In order to obtain high performance and stability for a smart greenhouse, it is important to identify the effect of various working parameters such as ventilation fan capacity and vent opening area. In the present study, a 3-dimensional CFD (computational fluid dynamics) simulation of a single-span greenhouse was conducted using the commercial program Ansys CFX 18.0. The numerical simulation was implemented to characterize the internal thermal and flow characteristics. In order to numerically model solar radiation, which spreads over a wide range of wavelengths, a multiband model that discretizes the spectrum into finite wavelength bands based on Wien's law is applied in the simulation. In addition, the absorption coefficient of the vinyl cover, which varies across the wavelength bands, is applied based on the Beer-Lambert law. To validate the numerical method, the numerical results for the temperature at specific monitoring points were compared with experimental data; the average error rates between them were 12.2-14.2%, and the numerical temperature distributions are in good agreement with the experimental data. The results of the present study can provide useful information for the design of various greenhouses. This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, Forestry and Fisheries (IPET) through the Advanced Production Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) (315093-03).
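The multiband treatment of the cover combined with the Beer-Lambert law I = I0·exp(−k·d) can be sketched per band. The band irradiances, absorption coefficients, and film thickness below are assumed illustrative values, not the study's material data:

```python
import math

# (band name, incident irradiance W/m^2, absorption coefficient 1/m) - assumed
bands = [("UV", 50.0, 900.0), ("visible", 500.0, 120.0), ("NIR", 450.0, 300.0)]
thickness = 0.15e-3   # vinyl film thickness in m (assumed)

# Beer-Lambert attenuation applied band by band, as in a multiband model
transmitted = {name: I0 * math.exp(-k * thickness) for name, I0, k in bands}
total_in = sum(I0 for _, I0, _ in bands)
total_out = sum(transmitted.values())
fraction = total_out / total_in   # fraction of irradiance reaching the interior
```

Because k differs strongly between bands, lumping the spectrum into a single band would misestimate the interior heat load, which is why the multiband discretization matters.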

Keywords: single-span greenhouse, CFD (computational fluid dynamics), solar radiation, multiband model, absorption coefficient

Procedia PDF Downloads 122
282 Patient Safety Culture in Brazilian Hospitals from Nurse's Team Perspective

Authors: Carmen Silvia Gabriel, Dsniele Bernardi da Costa, Andrea Bernardes, Sabrina Elias Mikael, Daniele da Silva Ramos

Abstract:

The goal of this quantitative study is to investigate patient safety culture from the perspective of professionals from the hospital nursing team. It was conducted in two Brazilian hospitals, and the sample included 282 nurses. Data collection occurred in 2013, through the questionnaire Hospital Survey on Patient Safety Culture. Based on the assessment of the dimensions, it is stressed that, in the dimension teamwork within hospital units, 69.4% of professionals agree that when a lot of work needs to be done quickly, they work together as a team; in the dimension supervisor/manager expectations and actions promoting safety, 70.2% agree that their supervisor overlooks patient safety problems. Regarding organizational learning and continuous improvement, 56.5% agree that the effectiveness of changes is evaluated after implementation. On hospital management support for patient safety, 52.8% report that the actions of hospital management show that patient safety is a top priority. On the overall perception of patient safety, 57.2% disagree that patient safety is never compromised by a higher amount of work to be completed. Regarding feedback and communication about error, 57.7% report that they always or usually receive such information. On communication openness, 42.9% said they never or rarely feel free to question the decisions or actions of their superiors. On frequency of event reporting, 64.7% said they often or always report events that caused no harm to patients. About teamwork across hospital units, the percentages of agreement and disagreement are similar, as on the item "there is good cooperation among hospital units that need to work together" (41.4% and 40.5%, respectively). Regarding staffing adequacy, 77.8% disagree that there are sufficient employees to do the job, and 52.4% agree that shift changes are problematic for patients.
On nonpunitive response to error, 71.7% indicate that when an event is reported, it seems that the focus is on the person. On the patient safety grade of the institution, 41.6% classified it as very good. It is concluded that there are positive points in the safety culture, as well as weaknesses such as a punitive culture and patient safety impaired by work overload.

Keywords: quality of health care, health services evaluation, safety culture, patient safety, nursing team

Procedia PDF Downloads 289
281 A Preliminary Kinematic Comparison of Vive and Vicon Systems for the Accurate Tracking of Lumbar Motion

Authors: Yaghoubi N., Moore Z., Van Der Veen S. M., Pidcoe P. E., Thomas J. S., Dexheimer B.

Abstract:

Optoelectronic 3D motion capture systems, such as the Vicon kinematic system, are widely utilized in biomedical research to track joint motion. These systems are considered powerful and accurate measurement tools, with <2 mm average error. However, they are costly and may be difficult to implement and utilize in a clinical setting. 3D virtual reality (VR) is gaining popularity as an affordable and accessible tool to investigate motor control and perception in a controlled, immersive environment. The HTC Vive VR system includes puck-style trackers that seamlessly integrate into its VR environments. These affordable, wireless, lightweight trackers may be more feasible for clinical kinematic data collection. However, the accuracy of HTC Vive Trackers (3.0) compared to optoelectronic 3D motion capture systems remains unclear. In this preliminary study, we compared the HTC Vive Tracker system to a Vicon kinematic system in a simulated lumbar flexion task. A 6-DOF robot arm (SCORBOT ER VII, Eshed Robotec/RoboGroup, Rosh Ha'Ayin, Israel) completed various reaching movements to mimic increasing levels of hip flexion (15°, 30°, 45°). Reflective markers, along with one HTC Vive Tracker (3.0), were placed on the rigid segment between the elbow and shoulder of the robot. We compared position measures collected simultaneously from both systems. Our preliminary analysis shows no significant differences between the Vicon motion capture system and the HTC Vive Tracker in the Z axis, regardless of hip flexion. In the X axis, we found no significant differences between the two systems at 15 degrees of hip flexion, but minimal differences at 30 and 45 degrees, ranging from 0.047 cm ± 0.02 SE (p = .03) at 30 degrees of hip flexion to 0.194 cm ± 0.024 SE (p < .0001) at 45 degrees of hip flexion. In the Y axis, we found a minimal difference at 15 degrees of hip flexion only (0.743 cm ± 0.275 SE; p = .007).
This preliminary analysis shows that the HTC Vive Tracker may be an appropriate, affordable option for gross motor motion capture when the Vicon system is not available, such as in clinical settings. Further research is needed to compare these two motion capture systems in different body poses and for different body segments.
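The per-axis comparison reported above (mean difference ± SE between paired samples) is straightforward to compute; the (x, y, z) samples below are fabricated for illustration, not the study's recordings:

```python
import math

# Paired (x, y, z) position samples in cm from the two systems (fabricated)
vicon = [(0.10, 0.20, 0.30), (0.12, 0.21, 0.33), (0.15, 0.24, 0.36), (0.11, 0.19, 0.31)]
vive  = [(0.14, 0.20, 0.31), (0.16, 0.22, 0.32), (0.19, 0.23, 0.37), (0.16, 0.20, 0.30)]

def axis_stats(a, b, axis):
    """Mean paired difference and its standard error along one axis."""
    diffs = [q[axis] - p[axis] for p, q in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
    return mean, math.sqrt(var / n)                        # SE of the mean

mean_x, se_x = axis_stats(vicon, vive, 0)
mean_y, se_y = axis_stats(vicon, vive, 1)
mean_z, se_z = axis_stats(vicon, vive, 2)
```

Feeding these paired differences into a t-test then yields the per-axis p-values the abstract reports.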

Keywords: lumbar, Vive Tracker, Vicon system, 3D motion, ROM

Procedia PDF Downloads 86
280 Prediction of Ionic Liquid Densities Using a Corresponding State Correlation

Authors: Khashayar Nasrifar

Abstract:

Ionic liquids (ILs) exhibit particular properties exemplified by extremely low vapor pressure and high thermal stability. The properties of ILs can be tailored by proper selection of cations and anions. As such, ILs are appealing as potential solvents to substitute traditional solvents with high vapor pressure. One of the IL properties required in chemical and process design is density. In developing corresponding state liquid density correlations, scaling hypothesis is often used. The hypothesis expresses the temperature dependence of saturated liquid densities near the vapor-liquid critical point as a function of reduced temperature. Extending the temperature dependence, several successful correlations were developed to accurately correlate the densities of normal liquids from the triple point to a critical point. Applying mixing rules, the liquid density correlations are extended to liquid mixtures as well. ILs are not molecular liquids, and they are not classified among normal liquids either. Also, ILs are often used where the condition is far from equilibrium. Nevertheless, in calculating the properties of ILs, the use of corresponding state correlations would be useful if no experimental data were available. With well-known generalized saturated liquid density correlations, the accuracy in predicting the density of ILs is not that good. An average error of 4-5% should be expected. In this work, a data bank was compiled. A simplified and concise corresponding state saturated liquid density correlation is proposed by phenomena-logically modifying reduced temperature using the temperature-dependence for an interacting parameter of the Soave-Redlich-Kwong equation of state. This modification improves the temperature dependence of the developed correlation. Parametrization was next performed to optimize the three global parameters of the correlation. The correlation was then applied to the ILs in our data bank with satisfactory predictions. 
The correlation was applied to IL densities at 0.1 MPa and tested, yielding an average uncertainty of around 2%. No adjustable parameters were used; only the critical temperature, critical volume, and acentric factor were required. Methods to extend the predictions to higher pressures (200 MPa) were also devised. Compared to other methods, this correlation was found to be more accurate. This work also presents the chronological order of developing such correlations for ILs, and the pros and cons of each are discussed.
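The abstract does not give the closed form of the proposed correlation, but its ingredients (a corresponding-states saturated density expression with three global parameters and a reduced temperature modified through the SRK alpha-function temperature dependence) can be sketched as follows. The functional form and the placeholder values of `a`, `b`, and `c` are illustrative assumptions, not the published correlation; only the SRK m-polynomial is the standard published expression.

```python
import math

def srk_alpha(Tr, omega):
    """Standard SRK alpha function; the m-polynomial is the published SRK form."""
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    return (1.0 + m * (1.0 - math.sqrt(Tr)))**2

def saturated_liquid_density(T, Tc, Vc, omega, a=1.0, b=1.0, c=1.0):
    """Illustrative three-parameter corresponding-states correlation:
    rho_s / rho_c = 1 + a*tau**(1/3) + b*tau + c*tau**(4/3),
    with tau = 1 - Tr_mod, where Tr is modified via the SRK alpha
    function to mimic the temperature-dependence modification the
    abstract describes.  a, b, c stand in for the three global
    parameters; their values here are placeholders, not fitted ones."""
    rho_c = 1.0 / Vc                     # critical density from critical volume
    Tr = T / Tc
    Tr_mod = Tr / srk_alpha(Tr, omega)   # hypothetical phenomenological modification
    tau = max(1.0 - Tr_mod, 0.0)
    return rho_c * (1.0 + a * tau**(1/3) + b * tau + c * tau**(4/3))
```

Only the critical temperature, critical volume, and acentric factor enter, matching the abstract's statement that no adjustable, substance-specific parameters are used once the three global constants are fitted.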

Keywords: correlation, corresponding state principle, ionic liquid, density

Procedia PDF Downloads 117
279 Prospects of Low Immune Response Transplants Based on Acellular Organ Scaffolds

Authors: Inna Kornienko, Svetlana Guryeva, Anatoly Shekhter, Elena Petersen

Abstract:

Transplantation is an effective treatment option for patients suffering from different end-stage diseases. However, it is plagued by a constant shortage of donor organs and the subsequent need for lifelong immunosuppressive therapy. Currently, some researchers are looking toward the use of pig organs to replace human organs for transplantation, since the matrix derived from porcine organs is a convenient substitute for the human matrix. As an initial step toward a new ex vivo tissue-engineered model, optimized protocols were created to obtain organ-specific acellular matrices, and their potential as tissue-engineered scaffolds for the culture of normal cells and tumor cell lines was evaluated. These protocols include decellularization by perfusion in a bioreactor system and by immersion-agitation on an orbital shaker, with the use of various detergents (SDS, Triton X-100) and freezing. Complete decellularization (in terms of residual DNA amount) is an important predictor of the probability of immune rejection of materials of natural origin. However, signs of cellular material may still remain within the matrix even after harsh decellularization protocols. In this regard, matrices obtained from tissues of low-immunogenic pigs with the α3Galactosyltransferase gene knocked out (GalT-KO) may be a promising alternative to native animal sources. The research included a study of the induced effect of frozen and fresh fragments of GalT-KO skin on the healing of full-thickness plane wounds in 80 rats. Commercially available wound dressings (Ksenoderm, Hyamatrix, and Alloderm) as well as allogenic skin were used as positive controls, and untreated wounds were analyzed as a negative control. The results were evaluated on the 4th day after grafting, which corresponds to the start of normal wound epithelization. It has been shown that the non-specific immune response in models treated with GalT-KO pig skin was milder than in all the control groups.
Research was performed to measure technical skin characteristics: stiffness and elasticity properties, corneometry, tewametry, and cutometry. These metrics enabled the evaluation of the hydration level and corneous layer desquamation level, as well as skin elasticity and micro- and macro-relief. These preliminary data may contribute to the development of personalized transplantable organs from GalT-KO pigs with a significantly limited potential for immune rejection. By applying growth factors to a decellularized skin sample, it is possible to achieve various regenerative effects based on the particular situation; in this research, BMP2 and heparin-binding EGF-like growth factor were used. Ideally, a bioengineered organ must be biocompatible, non-immunogenic, and able to support cell growth. Porcine organs are attractive for xenotransplantation if severe immunologic concerns can be bypassed. The results indicate that genetically modified pig tissues with the α3Galactosyltransferase gene knocked out may be used for the production of a low-immunogenic matrix suitable for transplantation.

Keywords: decellularization, low-immunogenic, matrix, scaffolds, transplants

Procedia PDF Downloads 269
278 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this remains challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems, such as shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that helps in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to the collected data to predict suitable careers. Besides helping users with their career choice, these systems provide numerous opportunities that are very useful when making this hard decision.
They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
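As a minimal illustration of the KNN approach mentioned above, the sketch below votes among the nearest neighbours of a skill profile in a toy training set. The skill dimensions, scores, and career labels are invented for illustration and are not from the described system.

```python
import math
from collections import Counter

# Hypothetical training data: skill scores (math, coding, communication,
# design) on a 0-10 scale, mapped to career labels.  Illustrative only.
TRAINING = [
    ([9, 8, 4, 3], "data scientist"),
    ([8, 9, 5, 2], "software engineer"),
    ([4, 3, 9, 8], "marketing"),
    ([3, 2, 8, 9], "designer"),
    ([9, 7, 5, 4], "data scientist"),
    ([3, 4, 9, 7], "marketing"),
]

def knn_predict(profile, k=3):
    """Classic k-nearest-neighbours majority vote over Euclidean distance."""
    dists = sorted(
        (math.dist(profile, features), label) for features, label in TRAINING
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

For example, a profile strong in math and coding, such as `[9, 8, 5, 3]`, falls among the "data scientist" neighbours of this toy set; a real system would train on far richer features (personality traits, non-technical skills, preferences).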

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 70
277 Correlation Study between Clinical and Radiological Findings in Knee Osteoarthritis

Authors: Nabil A. A. Mohamed, Alaa A. A. Balbaa, Khaled E. Ayad

Abstract:

Osteoarthritis (OA) of the knee is the most common form of arthritis and leads to more activity limitations (e.g., disability in walking and stair climbing) than any other disease, especially in the elderly. Recently, impaired proprioceptive accuracy of the knee has been proposed as a local factor in the onset and progression of radiographic knee OA (ROA). Purpose: To compare the clinical and radiological findings of healthy subjects with those of patients with knee OA, and to determine whether there is a correlation between the clinical and radiological findings in patients with knee OA. Subjects: Fifty-one patients diagnosed with unilateral or bilateral knee OA, aged 35-70 years, of both genders, without any previous history of knee trauma or surgery, and twenty-one normal subjects aged 35-68 years. Methods: Peak torque/body weight (PT/BW) was recorded from the knee extensors in isometric mode on an isokinetic dynamometer at an angle of 45°. The absolute angular error (AAE) was recorded at 45° and 30° to measure joint position sense (JPS). Anteroposterior (AP) plain X-rays were taken in a standing semiflexed knee position, and the average scores of the Timed Up and Go test (TUG) and the WOMAC were recorded as measures of knee pain, stiffness, and function. Comparison between the mean values of the different variables in the two groups was performed using the unpaired Student's t-test; P values ≤ 0.05 were considered significant. Results: There were significant differences in the studied variables between the experimental and control groups, except for the values of AAE at 30°. However, there was no significant correlation between the clinical findings (pain, function, muscle strength, and proprioception) and the severity of arthritic changes on X-ray.
Conclusion: From the findings of the current study, we can conclude that there was a significant difference between the two groups in all studied parameters (the WOMAC, functional level, quadriceps muscle strength, and joint proprioception). This study also did not support dependence on radiological findings in the management of knee OA, as the radiological features did not necessarily indicate the level of structural damage in patients with knee OA; the clinical features should also be considered in the treatment plan.
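The group comparison described (an unpaired Student's t-test on mean values, significance at P ≤ 0.05) can be sketched as follows. The function computes only the pooled-variance t statistic; the sample values in the usage note are illustrative, not the study's data.

```python
from statistics import mean, variance

def unpaired_t(sample_a, sample_b):
    """Unpaired (two-sample) Student's t statistic with pooled variance,
    the test used to compare group means such as PT/BW or AAE here."""
    na, nb = len(sample_a), len(sample_b)
    pooled = ((na - 1) * variance(sample_a)
              + (nb - 1) * variance(sample_b)) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5   # standard error of the difference
    return (mean(sample_a) - mean(sample_b)) / se
```

For instance, with made-up PT/BW samples `[1.1, 1.3, 0.9, 1.2, 1.0]` (OA) versus `[1.8, 2.0, 1.7, 1.9, 2.1]` (control), the statistic is compared against the critical t value at the chosen significance level and df = na + nb − 2 to decide significance.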

Keywords: joint position sense, peak torque, proprioception, radiological knee osteoarthritis

Procedia PDF Downloads 292
275 Organ Dose Calculator for Fetus Undergoing Computed Tomography

Authors: Choonsik Lee, Les Folio

Abstract:

Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is therefore critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida at the gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses for consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the given scan range of interest. We then compared the dose conversion coefficients for major fetal organs in the abdominal-pelvic CT scan of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ dose conversion coefficients was established for the eight developing fetuses undergoing CT. The coefficients were implemented into an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses by inputting the CT technical parameters as well as the age of the fetus. We found that the esophagus received the smallest dose, whereas the kidneys received the greatest dose in all fetuses in AP scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks).
The uterine dose was up to 1.7-fold greater than the esophageal dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose estimates for emergency CT scans, as well as for patient dose monitoring.
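The slice-summation approach described above (a helical scan approximated as CTDIvol times the sum of per-slice dose conversion coefficients over the scan range) can be sketched as follows. The coefficient values and slice indices are invented placeholders, not values from the published library.

```python
# Hypothetical per-slice dose conversion coefficients (mGy/mGy) for one
# fetal organ, keyed by 1 cm axial slice position.  Illustrative only.
COEFFS = {30: 0.02, 31: 0.05, 32: 0.11, 33: 0.14, 34: 0.12, 35: 0.06}

def fetal_organ_dose(ctdi_vol, start_slice, stop_slice, coeffs=COEFFS):
    """Approximate the organ dose of a helical scan as the sum of 1 cm
    axial contributions: organ dose = CTDIvol x sum of the per-slice
    conversion coefficients that fall within the scan range."""
    total = sum(coeffs.get(s, 0.0) for s in range(start_slice, stop_slice + 1))
    return ctdi_vol * total
```

For example, with CTDIvol = 10 mGy and a scan covering slices 31-34, the hypothetical coefficients above give 10 × 0.42 = 4.2 mGy; the actual tool looks up coefficients by gestational age and organ before applying the same summation.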

Keywords: computed tomography, fetal dose, pregnant women, radiation dose

Procedia PDF Downloads 124
274 Comparison of the Hospital Patient Safety Culture between Bulgarian, Croatian and American: Preliminary Results

Authors: R. Stoyanova, R. Dimova, M. Tarnovska, T. Boeva, R. Dimov, I. Doykov

Abstract:

Patient safety culture (PSC) is an essential component of the quality of healthcare, and improving PSC is considered a priority in many developed countries. A specialized software platform for the registration and evaluation of hospital patient safety culture was developed with the support of Medical University Plovdiv Project №11/2017. The aim of the study is to assess the status of PSC in Bulgarian hospitals and to compare it to that in US and Croatian hospitals. Methods: The study was conducted from June 01 to July 31, 2018, using the web-based Bulgarian version of the Hospital Survey on Patient Safety Culture questionnaire (B-HSOPSC). Two hundred and forty-eight medical professionals from different hospitals in Bulgaria participated in the study. To quantify the differences in the distributions of positive scores for each of the 42 HSOPSC items between the Bulgarian, Croatian, and US samples, the χ²-test was applied. The research hypothesis assumed that there are no significant differences between the Bulgarian, Croatian, and US PSCs. Results: The results revealed 14 significant differences in positive scores between the Bulgarian and Croatian PSCs and 15 between the Bulgarian and US PSCs, respectively. Bulgarian medical professionals gave fewer positive responses to 12 items than Croatian and US respondents. The Bulgarian respondents were more positive than the Croatians on feedback and communication about medical errors (items C1, C4, C5), as well as on the employment of locum staff (A7) and the frequency of reported mistakes (D1). Bulgarian medical professionals were more positive than their US colleagues on the communication of information at shift handover and across hospital units (F5, F7).
The distributions of positive scores on the items ‘Staff worry that their mistakes are kept in their personnel file’ (RA16), ‘Things “fall between the cracks” when transferring patients from one unit to another’ (RF3), and ‘Shift handovers are problematic for patients in this hospital’ (RF11) were significantly higher among Bulgarian respondents than among Croatian and US respondents. Conclusions: Significant differences in the distribution of positive scores were found between the Bulgarian and US PSCs on the one hand and between the Bulgarian and Croatian PSCs on the other. The study reveals that the distribution of positive responses could be explained by cultural, organizational, and healthcare system differences.
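The per-item comparison described (a χ²-test on the distribution of positive versus non-positive responses in two samples) can be sketched as a Pearson chi-square statistic over a 2×2 contingency table. The counts in the usage note are illustrative, not the study's data.

```python
def chi_square_2x2(pos_a, n_a, pos_b, n_b):
    """Pearson chi-square statistic for comparing the proportion of
    positive responses between two samples (a 2x2 contingency table),
    as applied per HSOPSC item.  Compare the result against the
    critical value 3.84 (df = 1, alpha = 0.05) for significance."""
    neg_a, neg_b = n_a - pos_a, n_b - pos_b
    table = [[pos_a, neg_a], [pos_b, neg_b]]
    row_totals = [sum(r) for r in table]
    col_totals = [pos_a + pos_b, neg_a + neg_b]
    total = n_a + n_b
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat
```

For example, if 150 of 248 respondents in one sample and 100 of 248 in another answered an item positively, the statistic exceeds 3.84 and the difference would be flagged as significant for that item.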

Keywords: patient safety culture, healthcare, HSOPSC, medical error

Procedia PDF Downloads 128