Search results for: nonstandard fitted operator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 790

250 Improving Trainings of Mineral Processing Operators Through Gamification and Modelling and Simulation

Authors: Pedro A. S. Bergamo, Emilia S. Streng, Jan Rosenkranz, Yousef Ghorbani

Abstract:

Within the often-hazardous mineral industry, simulation training has rapidly gained appreciation as an important method of increasing site safety and productivity through enhanced operator skill and knowledge. Performance calculations related to froth flotation, one of the most important concentration methods, are probably the hardest topic taught during the training of plant operators. Currently, most training programs teach these skills by traditional methods such as slide presentations and hand-written exercises, with a heavy focus on memorization. To optimize certain aspects of these programs, we developed “MinFloat”, which teaches the operational formulas of the froth flotation process with the help of gamification. The simulation core, based on a first-principles flotation model, was implemented in Unity3D, and an instructor tutoring system was developed which presents didactic content and reviews the selected answers. The game was tested by 25 professionals with extensive experience in the mining industry, based on a questionnaire formulated for training evaluations. According to their feedback, the game scored well in terms of quality, didactic efficacy and inspiring character. The testers' feedback on the main target audience and the outlook for the proposed solution are presented. This paper aims to provide technical background on the construction of educational games for the mining industry, besides showing how feedback from experts can be gathered more efficiently thanks to new technologies such as online forms.

Keywords: training evaluation, simulation-based training, modelling and simulation, froth flotation

Procedia PDF Downloads 96
249 Bi-Criteria Vehicle Routing Problem for Possibility Environment

Authors: Bezhan Ghvaberidze

Abstract:

A multiple-criteria optimization approach for the solution of the Fuzzy Vehicle Routing Problem (FVRP) is proposed. For the possibility environment, the levels of movement between customers are calculated by a constructed interactive simulation algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of total fuzzy travel time on closed routes, is constructed for the FVRP. A new, second criterion, maximization of the feasibility of movement on the closed routes, is constructed by the Choquet finite averaging operator. The FVRP is reduced to the bi-criteria partitioning problem for the so-called “promising” routes, which were selected from all admissible closed routes. The convenient selection of the “promising” routes allows us to solve the reduced problem in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. An exact algorithm is implemented based on D. Knuth's Dancing Links technique and the DLX algorithm. The main objective was to present a new approach for the FVRP when there are difficulties while moving on the roads; this approach is called the FVRP for extreme conditions (FVRP-EC). A further aim of this paper was to construct the solving model of the FVRP. Results are illustrated on a numerical example in which all Pareto-optimal solutions are found. An approach for the more complex FVRP model with time windows was also developed, and a numerical example is presented in which optimal routes are constructed for extreme conditions on the roads.

Keywords: combinatorial optimization, fuzzy vehicle routing problem, multiple objective programming, possibility theory

Procedia PDF Downloads 457
248 Diagnostic Accuracy of the Tuberculin Skin Test for Tuberculosis Diagnosis: Interest of Using ROC Curve and Fagan’s Nomogram

Authors: Nouira Mariem, Ben Rayana Hazem, Ennigrou Samir

Abstract:

Background and aim: During the past decade, the frequency of extrapulmonary forms of tuberculosis has increased. These forms are under-diagnosed using conventional tests. The aim of this study was to evaluate the performance of the Tuberculin Skin Test (TST) for the diagnosis of tuberculosis, using the ROC curve and Fagan's nomogram methodology. Methods: This was a case-control, multicenter study in 11 anti-tuberculosis centers in Tunisia, during the period from June to November 2014. The cases were adults aged between 18 and 55 years with confirmed tuberculosis. Controls were free from tuberculosis. A data collection sheet was filled out and a TST was performed for each participant. Diagnostic accuracy measures of the TST were estimated using the ROC curve and the area under the curve to determine the sensitivity and specificity of a given cut-off point. Fagan's nomogram was used to estimate its predictive values. Results: Overall, 1053 patients were enrolled, comprising 339 cases (sex ratio (M/F) = 0.87) and 714 controls (sex ratio (M/F) = 0.99). The mean age was 38.3 ± 11.8 years for cases and 33.6 ± 11 years for controls. The mean diameter of the TST induration was significantly higher among cases than controls (13.7 mm vs. 6.2 mm; p = 10⁻⁶). The area under the curve was 0.789 [95% CI: 0.758-0.819; p = 0.01], corresponding to a moderate discriminating power for this test. The most discriminative cut-off value of the TST, associated with the best sensitivity (73.7%) and specificity (76.6%) pair, was about 11 mm, with a Youden index of 0.503. Positive and negative predictive values were 3.11% and 99.52%, respectively. Conclusion: In view of these results, we can conclude that the TST can be used for tuberculosis diagnosis with good sensitivity and specificity. However, the skin induration measurement and its interpretation are operator-dependent and remain difficult and subjective. The combination of the TST with another test, such as the Quantiferon test, would be a good alternative.
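The predictive values above can be reproduced from the reported sensitivity and specificity with Bayes' theorem, which is what Fagan's nomogram encodes graphically. The sketch below is illustrative, not the authors' code; the ~1% pre-test prevalence is an assumed community prevalence, not a figure from the paper.

```python
def predictive_values(sens, spec, prevalence):
    """Post-test probabilities from sensitivity, specificity and pre-test
    prevalence via Bayes' theorem (the computation behind Fagan's nomogram)."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# Sensitivity/specificity reported for the 11 mm cut-off; 1% prevalence is assumed.
ppv, npv = predictive_values(0.737, 0.766, 0.01)
print(f"PPV = {ppv:.2%}, NPV = {npv:.2%}")
```

With a low pre-test prevalence the PPV lands near the reported 3.11% and the NPV near 99.5%, which is why the paper's PPV is so much lower than its sensitivity.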

Keywords: tuberculosis, tuberculin skin test, ROC curve, cut-off

Procedia PDF Downloads 47
247 Normalized Enterprises Architectures: Portugal's Public Procurement System Application

Authors: Tiago Sampaio, André Vasconcelos, Bruno Fragoso

Abstract:

The Normalized Systems Theory, which is designed to be applied to software architectures, provides a set of theorems, elements and rules with the purpose of enabling evolution in information systems, as well as ensuring that they are ready for change. To make that possible, this work's solution is to apply the Normalized Systems Theory to the domain of enterprise architectures, using ArchiMate. This application is achieved through the adaptation of the elements of the theory, making them artifacts of the modeling language. The theorems are applied through the identification of the viewpoints to be used in the architectures, as well as the transformation of the theory's encapsulation rules into architectural rules. In this way, it is possible to create normalized enterprise architectures, thus fulfilling the needs and requirements of the business. This solution was demonstrated using the Portuguese Public Procurement System. The Portuguese government aims to make this system as fair as possible, allowing every organization to have the same business opportunities. The aim is for every economic operator to have access to all public tenders, which are published on any of the six existing platforms, independently of where they are registered. To make this possible, we applied our solution to the construction of two different architectures capable of fulfilling the requirements of the Portuguese government. One of those architectures, TO-BE A, has a message broker that performs the communication between the platforms. The other, TO-BE B, represents the scenario in which the platforms communicate with each other directly. Apart from these two architectures, we also represent the AS-IS architecture, which demonstrates the current behavior of the Public Procurement System. Our evaluation is based on a comparison between the AS-IS and the TO-BE architectures, regarding the fulfillment of the rules and theorems of the Normalized Systems Theory and some quality metrics.

Keywords: archimate, architecture, broker, enterprise, evolvable systems, interoperability, normalized architectures, normalized systems, normalized systems theory, platforms

Procedia PDF Downloads 334
246 Influence of Hearing Aids on Non-Medically Treatable Deafness

Authors: Niragira Donatien

Abstract:

The progress of technology creates new expectations for patients, and the world of deafness is no exception. In recent years, there have been considerable advances in the field of technologies aimed at assisting failing hearing. According to the usual medical vocabulary, hearing aids are actually orthotics: they do not replace an organ but compensate for a functional impairment. Hearing amplification is useful for a large number of people with hearing loss. Hearing aids restore speech audibility; however, their benefits vary depending on the quality of residual hearing. The hearing aid is not a "cure" for deafness. It cannot correct all affected hearing abilities and should be considered as an aid to communication. Who, then, are the best candidates for hearing aids? The urge to judge from the audiogram alone should be resisted here, as audiometry only indicates the ability to detect non-verbal sounds. To prevent hearing aids from ending up in the drawer, it is important to ensure that the patient's disability situations justify the use of this type of orthosis. During pre-fitting counselling, the person with hearing loss must be informed of the advantages and disadvantages of amplification in his or her case. Their expectations must be realistic. They also need to be aware that the adaptation process requires a good deal of patience and perseverance. They should be informed about the various models and types of hearing aids, including all the aesthetic, functional, and financial considerations. If the person's motivation "survives" pre-fitting counselling, we are in the presence of a good candidate for amplification. Beyond its relevance, hearing aid fitting raises other questions, such as whether one or both ears should be fitted. In short, the results found in this study show that hearing aids significantly improve the patient's quality of audibility, and hence this technology must be made accessible everywhere in the world as it continues to progress.

Keywords: audiology, influence, hearing, medically treatable

Procedia PDF Downloads 32
245 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach

Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma

Abstract:

Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major and, especially, minor hysteresis loops is one of the most challenging topics for computational magnetism. Recent attempts at mathematical prediction in this context using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous (NARX) neural network model. This architecture aims to achieve a relative RMSE of the order of a few hundred ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high-field and low-field cycles. The NARX-based architecture is compared with the state of the art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good result obtained very promising for future applications in this context.
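To illustrate the NARX structure the abstract refers to, the following sketch fits a purely linear autoregressive-exogenous model by least squares on simulated data. The paper's actual model is a deep neural network; the signal coefficients and lag orders here are invented for illustration.

```python
import numpy as np

# y(t) is predicted from lagged outputs y(t-1..t-na) and lagged exogenous
# inputs u(t-1..t-nb) -- the defining structure of a NARX model. A linear
# regressor stands in for the network to keep the sketch minimal.
def narx_design(y, u, na=2, nb=2):
    lag = max(na, nb)
    rows = []
    for t in range(lag, len(y)):
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
    return np.array(rows), y[lag:]

rng = np.random.default_rng(0)
u = rng.standard_normal(500)          # excitation signal (stand-in for current)
y = np.zeros(500)                     # response (stand-in for magnetic field)
for t in range(2, 500):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.5 * u[t-1] + 0.1 * u[t-2]

X, target = narx_design(y, u)
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print(np.round(coef, 3))              # recovers the generating coefficients
```

A neural NARX replaces the linear map with a feedforward network over the same lagged regressor vector.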

Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX

Procedia PDF Downloads 108
244 Kinetic Study of Physical Quality Changes on Jumbo Squid (Dosidicus gigas) Slices during Application High-Pressure Impregnation

Authors: Mario Perez-Won, Roberto Lemus-Mondaca, Fernanda Marin, Constanza Olivares

Abstract:

This study presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration to jumbo squid (Dosidicus gigas) slices. Diffusion coefficients for both water and solids were improved by the process pressure, being influenced by the pressure level. The working conditions were pressures of 100, 250 and 400 MPa, as well as atmospheric pressure (0.1 MPa), for time intervals from 30 to 300 seconds and a 15% NaCl concentration. The mathematical expressions used for the mass transfer simulations of both water and salt were those corresponding to the Newton, Henderson and Pabis, Page and Weibull models, of which the Weibull and Henderson-Pabis models presented the best fit to the water and salt experimental data, respectively. The values of the water diffusivity coefficient varied from 1.62 to 8.10x10⁻⁹ m²/s, whereas that for salt varied from 14.18 to 36.07x10⁻⁹ m²/s for the selected conditions. Finally, as to the quality parameters studied under the range of experimental conditions, the treatment at 250 MPa yielded a minimum hardness in the samples, whereas springiness, cohesiveness and chewiness in the 100, 250 and 400 MPa treatments presented statistical differences with respect to unpressurized samples. The colour parameter L* (lightness) increased, whereas the b* (yellowish) and a* (reddish) parameters decreased with increasing pressure level. Thus, samples presented a brighter aspect and a mildly cooked appearance. The results presented in this study support the enormous potential of hydrostatic pressure application as an important technique for compound impregnation under high pressure.
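As an illustration of the Weibull model reported as the best fit for the water data, the sketch below fits MR(t) = exp(-(t/α)^β) to synthetic points with SciPy; the data points and parameter values are invented, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Weibull thin-layer model for the dimensionless moisture ratio MR(t).
def weibull(t, alpha, beta):
    return np.exp(-(t / alpha) ** beta)

# Synthetic "measurements" over the paper's 30-300 s window, generated from
# assumed parameters (alpha = 150 s, beta = 0.8) plus a small perturbation.
t = np.array([30, 60, 120, 180, 240, 300], dtype=float)
mr = weibull(t, 150.0, 0.8) + 0.005 * np.array([1, -1, 1, -1, 1, -1])

(alpha, beta), _ = curve_fit(weibull, t, mr, p0=[100, 1])
print(f"alpha = {alpha:.1f} s, beta = {beta:.2f}")
```

The same fit, repeated at each pressure level, yields the rate parameters from which effective diffusivities are back-calculated.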

Keywords: colour, diffusivity, high pressure, jumbo squid, modelling, texture

Procedia PDF Downloads 320
243 Predictive Analytics of Bike Sharing Rider Parameters

Authors: Bongs Lainjo

Abstract:

The evolution and escalation of bike-sharing programs (BSP) continue unabated. Since the sixties, many countries have introduced different models and strategies of BSP, with variations ranging from dockless models to electronic real-time monitoring systems. Reasons for using BSP include recreation, errands, work, etc., and there is every indication that more complex and innovative rider-friendly systems are yet to be introduced. A bike-sharing system is a computer-controlled system in which individuals can borrow bikes for a fee, or for free, for a limited period. Although the popularity of such systems in recent years all around the world has resulted in many studies on public cycling systems, there have been few previous studies on the factors influencing public bicycle travel behavior. The objective of this paper is to analyze the variables currently established by different operators and streamline them, identifying the most compelling ones using analytics. Given the contents of available databases, there is a lack of uniformity and no common standard on what is required and what is not. Two factors appear to be common: user type (registered and unregistered) and the duration of each trip. This article uses historical data provided by one operator based in the greater Washington, District of Columbia, USA area. Several variables, including categorical and continuous data types, were screened. Eight out of 18 were considered acceptable and contribute significantly to determining a useful and reliable predictive model. This study has identified unprecedented, useful and pragmatic parameters required to improve BSP ridership dynamics.

Keywords: sharing program, historical data, parameters, ridership dynamics, trip duration

Procedia PDF Downloads 114
242 Return on Investment of a VFD Drive for Centrifugal Pump

Authors: Benhaddadi M., Déry D.

Abstract:

Electric motors are the single biggest consumer of electricity, and their consumption is projected to more than double by 2050. Meanwhile, existing technologies offer the potential to reduce motor energy demand by up to 30%, whereas the know-how to realise these energy savings is not extensively applied. That is why the authors first conducted a detailed analysis of the regulation of the electric motor market in North America. To illustrate the colossal energy savings potential offered by the VFD, the authors built an experimental setup based on a centrifugal pump, simultaneously equipped with regulating throttle valves and a variable frequency drive (VFD). The experimental results obtained for a 1.5 HP motor pump are extended to other motor powers, as centrifugal pumps that differ in power may have similar operational characteristics if they are located in a similar kind of process, permitting simulations for 5 HP and 100 HP motors. According to the results obtained, VFDs tend to be most cost-effective when fitted to larger motor pumps, with a higher duty cycle and more relative time operating at lower than full load. The energy saving permitted by VFD use is huge, and the payback period for the drive investment is short. Nonetheless, it is important to highlight that there is no general rule of thumb for the impact of the relative time operating at lower than full load. Indeed, in terms of energy-saving differences, 50% flow regulation is tremendously better than 75% regulation, but only slightly better than 25%. Two main distinct reasons can explain these somewhat unanticipated results: the characteristics of the process and the drop in efficiency when the motor is operating at low speed.
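The gap between throttling and speed control can be sketched with the pump affinity laws, under which shaft power scales roughly with the cube of speed while a throttled pump keeps running near full speed. The throttling curve below is a crude assumption for illustration, not the authors' measurements.

```python
# Idealized affinity law: at reduced flow a VFD-driven pump draws power
# proportional to the cube of the flow (speed) fraction.
def vfd_power_fraction(flow_fraction):
    return flow_fraction ** 3

# Assumed throttling curve: a throttled pump still draws a large base power.
def throttle_power_fraction(flow_fraction):
    return 0.4 + 0.6 * flow_fraction

for q in (0.75, 0.50, 0.25):
    vfd, thr = vfd_power_fraction(q), throttle_power_fraction(q)
    print(f"{q:.0%} flow: VFD {vfd:.0%} power, throttle {thr:.0%} power, "
          f"saving {thr - vfd:.0%}")
```

Even this toy model reproduces the qualitative finding: the saving at 50% flow exceeds both the 75% and the 25% cases, because at very low flow the throttled baseline power has also dropped.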

Keywords: motor, drive, energy efficiency, centrifugal pump

Procedia PDF Downloads 49
241 Using Deep Learning for the Detection of Faulty RJ45 Connectors on a Radio Base Station

Authors: Djamel Fawzi Hadj Sadok, Marrone Silvério Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner

Abstract:

A radio base station (RBS), part of the radio access network, is a particular type of equipment that supports the connection between a wide range of cellular user devices and an operator's network access infrastructure. Nowadays, most RBS maintenance is carried out manually, resulting in a time-consuming and costly task. A suitable candidate for RBS maintenance automation is repairing faulty links between devices caused by missing or unplugged connectors. This paper proposes and compares two deep learning solutions to identify attached RJ45 connectors on network ports. We name the solution based on object detection connector detection, and the one based on object classification connector classification. With connector detection, we obtain an accuracy of 0.934 and a mean average precision of 0.903. Connector classification obtains a maximum accuracy of 0.981 and an AUC of 0.989. Although connector detection was outperformed in this study, this should not be viewed as an overall result, as connector detection is more flexible in scenarios where there is no precise information about the environment and the possible devices, whereas connector classification requires that information to be well-defined.

Keywords: radio base station, maintenance, classification, detection, deep learning, automation

Procedia PDF Downloads 174
240 Estimation of Particle Size Distribution Using Magnetization Data

Authors: Navneet Kaur, S. D. Tiwari

Abstract:

Magnetic nanoparticles possess fascinating properties which make their behavior unique in comparison to the corresponding bulk materials. Superparamagnetism is one such interesting phenomenon, exhibited only by small particles of magnetic materials. In this state, the thermal energy of the particles becomes larger than their magnetic anisotropy energy, so the particle magnetic moment vectors fluctuate between states of minimum energy. This situation is similar to the paramagnetism of non-interacting ions and is termed superparamagnetism. The magnetization of such systems has been described by the Langevin function, but the estimated fit parameters in this case are found to be unphysical, due to the non-consideration of the particle size distribution. In this work, an analysis of magnetization data on NiO nanoparticles is presented considering the effect of the particle size distribution. Nanoparticles of NiO of two different sizes are prepared by heating freshly synthesized Ni(OH)₂ at different temperatures. Room temperature X-ray diffraction patterns confirm the formation of a single phase of NiO. The diffraction lines are quite broad, indicating the nanocrystalline nature of the samples. The average crystallite sizes are estimated to be about 6 and 8 nm. The samples are also characterized by transmission electron microscopy. The magnetization of both samples is measured as a function of temperature and applied magnetic field. Zero-field-cooled and field-cooled magnetization are measured as a function of temperature to determine the bifurcation temperature. The magnetization is also measured at several temperatures in the superparamagnetic region. The data are fitted to an appropriate expression considering a distribution in particle size, following a least-squares fit procedure. The computer codes are written in Python. The presented analysis is found to be very useful for estimating the particle size distribution present in the samples. The estimated distributions are compared with those determined from transmission electron micrographs.
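A minimal sketch of the kind of size-distribution-weighted Langevin average described above is given below; the lognormal moment distribution and its parameters are illustrative, not the fitted NiO values.

```python
import numpy as np

# Langevin function L(x) = coth(x) - 1/x.
def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x

# Moment-weighted Langevin average over a lognormal distribution of particle
# moments mu (in arbitrary units), evaluated at a given field/temperature
# ratio h_over_t. A single-moment fit would use langevin() alone, which is
# what produces the unphysical parameters noted in the abstract.
def m_of_h(h_over_t, mu_median=500.0, sigma=0.5, n=200):
    mu = np.geomspace(mu_median / 20, mu_median * 20, n)      # moment grid
    w = np.exp(-np.log(mu / mu_median) ** 2 / (2 * sigma**2)) / mu
    w /= w.sum()                                              # lognormal weights
    return np.sum(w * mu * langevin(mu * h_over_t)) / np.sum(w * mu)

print(m_of_h(1e-3), m_of_h(1e-1))   # rises toward saturation (M/Ms -> 1)
```

A least-squares fit then adjusts the median moment and width σ so that this averaged curve matches the measured M(H) isotherms.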

Keywords: anisotropy, magnetization, nanoparticles, superparamagnetism

Procedia PDF Downloads 109
239 Generation of Renewable Energy Through Photovoltaic Panels, Albania Photovoltaic Capacity

Authors: Dylber Qema

Abstract:

Driven by recent developments in technology and growing concern about the sustainability and environmental impact of conventional fuel use, the possibility of producing clean and sustainable energy in significant quantities from renewable energy sources has sparked interest all over the world. Solar energy is one source for the generation of electricity, with no emissions or environmental pollution. The electricity produced by photovoltaics can supply a home or business and can even be sold to, or exchanged with, the grid operator. A very positive effect of using photovoltaic modules is that, unlike all other forms of energy production, they produce neither greenhouse gases nor chemical waste. Photovoltaics are becoming one of the largest investments in the field of renewable generating units, and improving the reliability of the electric power system is one of the most important impacts of installing photovoltaics (PV). Renewable energy sources are so large that they can meet the energy demands of the whole world, thus enabling a sustainable supply as well as reducing local and global atmospheric emissions. Albania is rated by experts as one of the most favorable countries in Europe for the production of electricity from solar panels, but the country currently produces only about 1% of its energy from the sun, while the rest of its needs are met by hydropower plants and imports. Albania has very good characteristics in terms of solar radiation (about 1300–1400 kWh/m² per year). Solar energy has great potential and is a permanent source of energy with high economic efficiency. Photovoltaic energy is also seen as an alternative, as long periods of drought in Albania have produced crises and high costs for securing energy on the foreign market.

Keywords: capacity, ministry of tourism and environment, obstacles, photovoltaic energy, sustainable

Procedia PDF Downloads 35
238 A Sense of Home: Study of Walk-up Apartment Housing Units In Yangon, Myanmar

Authors: Phyo Kyaw Kyaw

Abstract:

In the Yangon urban landscape, one cannot help but notice old buildings from the colonial period alongside recent condominium developments and many walk-up apartment buildings accommodating the urbanization, growing population and socio-economic status of the Myanmar people. Walk-up apartments were built and became popular after the British colonial period (around the 1950s) and are still built today due to their cost-effectiveness and their capacity to accommodate low- to mid-income residents in metropolitan Yangon. Approximately 90% of apartment buildings are walk-up apartments. The common impression of walk-up apartments in Yangon is of old, rectangular, box-shaped buildings with homogeneous envelopes and dull interiors of limited square footage. In other words, the buildings are full of constraints, lack good user experiences, and are not well fitted to modern life. Residents consequently suffer for many years; some may live in such apartments their entire lives. Thousands of people living in walk-up apartments on a daily basis are being shaped by the space and its inadequate quality of living. Can it be called “home” by the dwellers, or is the place a temporary shelter? Online semi-structured interviews of 15 apartments' residents and online questionnaire surveys of 70 apartment residents were conducted. This research aims to explore what makes “a sense of home” for walk-up apartment users in Yangon, Myanmar, by studying subjective responses shaped by the interior and the experience of the spaces in the apartments, to understand the perception of the residents and improve the quality of living. The result reflects the priority level of important factors in relation to the sense-of-home framework.

Keywords: home, living quality, space, perception, residents, walk-up apartment, Yangon

Procedia PDF Downloads 76
237 Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision

Authors: Nastaran Hajiheydari, Mohammad Soltani Delgosha

Abstract:

Vendor (supplier) selection is a group decision-making (GDM) process in which, based on some predetermined criteria, the experts' preferences are provided in order to rank and choose the most desirable suppliers. In the real business environment, our attitudes and choices are often made in uncertain and indecisive situations and cannot be expressed in a crisp framework; intuitionistic fuzzy sets (IFSs) can handle such situations in the best way. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems. This method, used to determine the compromise feasible solution with respect to conflicting criteria, introduces a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs; we therefore extend intuitionistic fuzzy (IF) VIKOR to solve the vendor selection problem under an IF GDM environment. The present study intends to develop an IF VIKOR method in a GDM situation. A model is presented to calculate the criterion weights based on an entropy measure. Then, the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the total decision matrix. In the next stage, an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and the negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to solve a vendor selection problem is illustrated.
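For orientation, the sketch below implements the classical crisp VIKOR ranking index Q; the paper's contribution is the intuitionistic fuzzy extension of this scheme, and the vendor scores and weights here are hypothetical.

```python
import numpy as np

# Crisp VIKOR: rows = vendors, columns = criteria (all benefit-type here).
def vikor(X, weights, v=0.5):
    f_star, f_minus = X.max(axis=0), X.min(axis=0)       # ideal / anti-ideal
    D = weights * (f_star - X) / (f_star - f_minus)      # weighted regret per criterion
    S, R = D.sum(axis=1), D.max(axis=1)                  # group utility, individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) + \
        (1 - v) * (R - R.min()) / (R.max() - R.min())
    return Q                                             # lower Q = better compromise

X = np.array([[7.0, 8.0, 6.0],    # hypothetical vendor scores on 3 criteria
              [8.0, 6.0, 7.0],
              [6.0, 9.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])     # hypothetical criterion weights
Q = vikor(X, w)
print("best vendor:", int(np.argmin(Q)))
```

In the IF extension, the crisp scores are replaced by intuitionistic fuzzy numbers, the weights come from the entropy measure, and the ideal points become the PIIFN and NIIFN.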

Keywords: group decision making, intuitionistic fuzzy set, intuitionistic fuzzy entropy measure, vendor selection, VIKOR

Procedia PDF Downloads 129
236 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques

Authors: Jonathan Iworiso

Abstract:

Forecasting the equity premium out-of-sample is a major concern for researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies on the choice of variables and suitable techniques among scholars. This research focuses mainly on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding-window method. A broad category of sophisticated regression models involving model complexity was employed. The RT models, including Ridge, Forward-Backward (FOBA) Ridge, Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. The empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. The models provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to benefit an investor who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts at minimal risk.
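The recursive expanding-window protocol can be sketched as follows with one of the listed RT models (LASSO). The data are simulated stand-ins for the predictors and the premium, and the benchmark is a simplified historical-average forecast.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Simulated monthly data: 10 predictors, of which only 2 carry signal.
rng = np.random.default_rng(1)
T, k = 240, 10
X = rng.standard_normal((T, k))
beta = np.zeros(k); beta[:2] = [0.5, -0.3]       # sparse true coefficients
y = X @ beta + 0.1 * rng.standard_normal(T)

# Expanding window: at each month t, refit on all data up to t, then
# forecast month t one step ahead (never touching future observations).
start = 120
forecasts = []
for t in range(start, T):
    model = Lasso(alpha=0.01).fit(X[:t], y[:t])
    forecasts.append(model.predict(X[t:t+1])[0])

# Out-of-sample R^2 against a historical-average benchmark (simplified here
# to the initial-window mean).
bench = y[:start].mean()
oos_r2 = 1 - np.sum((y[start:] - forecasts) ** 2) / np.sum((y[start:] - bench) ** 2)
print(f"out-of-sample R^2: {oos_r2:.2f}")
```

A positive out-of-sample R² is the statistical-predictability criterion; the economic criterion layers a mean-variance allocation rule on top of these forecasts.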

Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains

Procedia PDF Downloads 77
235 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data

Authors: Georgiana Onicescu, Yuqian Shen

Abstract:

Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we propose a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed variable selection using Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed model selection with only the variables selected in stage I and compared the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases where all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, a binary indicator and the combination of a binary indicator and Lasso, were considered and compared as alternative methods. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. When compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.

Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection

Procedia PDF Downloads 112
234 Assessing and Identifying Factors Affecting Customers Satisfaction of Commercial Bank of Ethiopia: The Case of West Shoa Zone (Bako, Gedo, Ambo, Ginchi and Holeta), Ethiopia

Authors: Habte Tadesse Likassa, Bacha Edosa

Abstract:

Customer satisfaction is vital to the existence, productivity and success of banks, as of any business organization. The main goal of this study was to assess and identify the factors that influence customer satisfaction in the West Shoa Zone branches of the Commercial Bank of Ethiopia (Holeta, Ginchi, Ambo, Gedo and Bako). A stratified random sampling procedure was used, and 520 customers were drawn from the target population by simple random sampling (lottery method); the sample size for each branch was allocated using probability-proportional-to-size techniques. Both descriptive and inferential statistical methods were employed, and a binary logistic regression model was fitted to test the significance of the factors affecting customer satisfaction. The SPSS statistical package was used for data analysis. The results reveal that the overall level of customer satisfaction in the study area is low: 38.85% of customers were satisfied, compared with 61.15% who were not. Almost all factors included in the study were significantly associated with customer satisfaction. Comparing branches by odds ratios, customers of the Ambo and Bako branches were less satisfied than those of the Holeta branch, while customers of the Ginchi and Gedo branches were more satisfied than those of Holeta. Since the level of customer satisfaction in the study area is low, the concerned bodies are advised to work cooperatively to maximize the satisfaction of their customers.
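As a rough illustration of the modelling step, a binary logistic regression can be fitted by plain gradient ascent and a branch effect read off as an odds ratio. The toy data below (a single branch indicator) are invented for the sketch and are not taken from the study.

```python
import math

def fit_logistic(X, y, lr=0.5, n_iter=5000):
    """Binary logistic regression by gradient ascent on the log-likelihood.
    Returns [intercept, b1, ..., bp]."""
    n, p = len(X), len(X[0])
    w = [0.0] * (p + 1)
    for _ in range(n_iter):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            pred = 1.0 / (1.0 + math.exp(-z))
            err = yi - pred              # score contribution of this observation
            grad[0] += err
            for j in range(p):
                grad[j + 1] += err * xi[j]
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return w

# toy data: satisfaction (1/0) by branch indicator; 1/4 vs 3/4 satisfied,
# so the odds ratio is (3/1)/(1/3) = 9
X = [[0.0]] * 4 + [[1.0]] * 4
y = [0, 0, 0, 1, 1, 1, 1, 0]
w = fit_logistic(X, y)
or_branch = math.exp(w[1])   # odds ratio for the branch indicator
```

Exponentiating a fitted coefficient gives the odds ratio used in the abstract's branch comparisons.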

Keywords: customers, satisfaction, binary logistic, complain handling process, waiting time

Procedia PDF Downloads 437
233 Establishment of a Nomogram Prediction Model for Postpartum Hemorrhage during Vaginal Delivery

Authors: Yinglisong, Jingge Chen, Jingxuan Chen, Yan Wang, Hui Huang, Jing Zhang, Qianqian Zhang, Zhenzhen Zhang, Ji Zhang

Abstract:

Purpose: The study aims to establish a nomogram prediction model for postpartum hemorrhage (PPH) in vaginal delivery. Patients and Methods: Clinical data were retrospectively collected from vaginal delivery patients admitted to a hospital in Zhengzhou, China, between June 1 and October 31, 2022. Univariate and multivariate logistic regression were used to filter out independent risk factors, and a nomogram model for PPH in vaginal delivery was established from the risk factor coefficients. Bootstrapping was used for internal validation. To assess discrimination and calibration, receiver operating characteristic (ROC) and calibration curves were generated for the derivation and validation groups. Results: A total of 1340 vaginal deliveries were enrolled, of which 81 (6.04%) involved PPH. Logistic regression indicated that a history of uterine surgery, induction of labor, duration of the first stage of labor, neonatal weight, WBC value (during the first stage of labor), and cervical lacerations were all independent risk factors for hemorrhage (P < 0.05). The areas under the ROC curves (AUC) of the derivation and validation groups were 0.817 and 0.821, respectively, indicating good discrimination. The two calibration curves showed that nomogram predictions and observed outcomes were highly consistent (P = 0.105, P = 0.113). Conclusion: The developed individualized risk prediction nomogram can assist midwives in recognizing and diagnosing high-risk groups for PPH and initiating early warning to reduce PPH incidence.
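The discrimination measure reported above, the area under the ROC curve, has a simple rank interpretation that can be computed directly: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. This is a generic sketch, not the authors' code.

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of positive/negative pairs the score orders
    correctly, counting ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means the model scores no better than chance; values around 0.82, as in the abstract, indicate good discrimination.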

Keywords: vaginal delivery, postpartum hemorrhage, risk factor, nomogram

Procedia PDF Downloads 46
232 A Simple Model for Solar Panel Efficiency

Authors: Stefano M. Spagocci

Abstract:

The efficiency of photovoltaic panels can be calculated with software packages such as RETScreen, which allow design engineers to take financial as well as technical considerations into account. RETScreen is interfaced with meteorological databases, so efficiency calculations can be carried out realistically. The author has recently contributed to the development of solar modules with accumulation capability and an embedded water purifier, aimed at off-grid users such as those in developing countries. The software packages examined do not allow ancillary equipment to be taken into account, hence the decision to implement a technical and financial model of the system. The author realized that, rather than re-implementing the quite sophisticated model of RETScreen, a mathematical description of which is in any case not publicly available, it was possible to simplify it drastically while retaining the meteorological factors that RETScreen supplies in numerical form. The day-by-day efficiency of a photovoltaic solar panel was parametrized by the product of factors expressing, respectively, daytime duration, solar right-ascension motion, solar declination motion, cloudiness and temperature. For the sun-motion-dependent factors, positional astronomy formulae, simplified by the author, were employed. Meteorology-dependent factors were fitted by simple trigonometric functions to the numerical data supplied by RETScreen. The accuracy of the model was tested by comparison with the predictions of RETScreen, which it matched to within 11%. In conclusion, our study resulted in a model that can be easily implemented in a spreadsheet, and thus managed by non-specialist personnel, or in more sophisticated software packages. The model was used in a number of design exercises concerning photovoltaic solar panels and ancillary equipment such as the above-mentioned water purifier.
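A minimal product-of-factors model of this kind can be sketched from standard positional-astronomy formulae (Cooper's declination approximation and the sunset-hour-angle day length). The normalization to a 12 h day and the cloudiness/temperature derating factors below are illustrative assumptions, not the author's actual parametrization.

```python
import math

def declination(day_of_year):
    """Solar declination in degrees (Cooper's approximation)."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))

def day_length_hours(lat_deg, day_of_year):
    """Daylight hours from the sunset-hour-angle formula."""
    lat = math.radians(lat_deg)
    dec = math.radians(declination(day_of_year))
    cos_ws = -math.tan(lat) * math.tan(dec)
    cos_ws = max(-1.0, min(1.0, cos_ws))       # clamp for polar day/night
    return 2.0 * math.degrees(math.acos(cos_ws)) / 15.0

def daily_output_factor(lat_deg, day_of_year, cloudiness, temp_derate=1.0):
    """Illustrative product-of-factors daily yield, normalized to a 12 h
    cloud-free day; cloudiness in [0, 1], temp_derate in (0, 1]."""
    return (day_length_hours(lat_deg, day_of_year) / 12.0) \
        * (1.0 - cloudiness) * temp_derate
```

Each factor can be tabulated in a separate spreadsheet column and multiplied row by row, which is what makes this form of model easy to hand to non-specialist personnel.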

Keywords: clean energy, energy engineering, mathematical modelling, photovoltaic panels, solar energy

Procedia PDF Downloads 30
231 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model

Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li

Abstract:

Compared with terrestrial network traffic, spatial information network traffic exhibits both self-similarity and short-range correlation. Studying its traffic prediction methods can improve the resource utilization of spatial information networks and provide an important basis for their traffic planning. In this paper, considering both the accuracy and the complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long-range correlation and detail components with short-range correlation, and a time series hybrid prediction model based on wavelet decomposition is proposed to predict the traffic. First, the original traffic data are decomposed into approximate and detail components by a wavelet decomposition algorithm. According to the tailing-off and cut-off behaviour of the autocorrelation and partial autocorrelation functions of each component, the corresponding model (AR/MA/ARMA) of each detail component can be established directly, while the approximate component is modelled with an ARIMA model after smoothing. Finally, the predictions of the individual models are combined to obtain the prediction for the original data. The method considers not only the self-similarity of a spatial information network but also the short-range correlation caused by bursty network traffic; it is verified using measured data from a backbone network released by the MAWI working group in 2018. Compared with typical time series models, the predictions of the hybrid model are closer to the real traffic data and have a smaller relative root mean square error, making it more suitable for a spatial information network.
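The decomposition step can be illustrated with a one-level Haar transform plus a least-squares AR(1) fit per component. This is a toy sketch of the hybrid scheme, not the wavelet/AR/MA/ARMA/ARIMA pipeline used in the paper.

```python
def haar_decompose(x):
    """One-level Haar split: pairwise averages (approximation, long-memory
    trend) and pairwise half-differences (detail, short-range fluctuation).
    A trailing odd sample is dropped."""
    approx = [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]
    return approx, detail

def fit_ar1(x):
    """Least-squares AR(1) coefficient of a demeaned series."""
    m = sum(x) / len(x)
    d = [v - m for v in x]
    num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
    den = sum(v * v for v in d[:-1])
    return num / den if den else 0.0

x = [4.0, 2.0, 8.0, 6.0]
approx, detail = haar_decompose(x)
# each original pair is exactly recoverable: (a + d, a - d)
recon = [v for a, d in zip(approx, detail) for v in (a + d, a - d)]
```

In the hybrid scheme, each component would be forecast with its own fitted model and the component forecasts recombined, exactly as the reconstruction identity above recombines the components themselves.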

Keywords: spatial information network, traffic prediction, wavelet decomposition, time series model

Procedia PDF Downloads 119
230 Statistical Analysis and Optimization of a Process for CO2 Capture

Authors: Muftah H. El-Naas, Ameera F. Mohammad, Mabruk I. Suleiman, Mohamed Al Musharfy, Ali H. Al-Marzouqi

Abstract:

CO2 capture and storage technologies play a significant role in controlling climate change through the reduction of carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process in which carbon dioxide is passed into pH-adjusted, high-salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. The process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and a higher pH level, without the use of ammonia. The process was tested in a bubble column semi-batch reactor and optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters using a four-level, four-variable central composite design (CCD). The operating parameters were gas flow rate (0.5-1.5 L/min), reactor temperature (10-50 °C), buffer concentration (0.2-2.6%) and water salinity (25-197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed by analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were then tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99%, and the maximum sodium removal was 35%. The experimental and predicted values agreed within the 95% confidence interval, demonstrating that the developed model can successfully predict capture efficiency and sodium removal with the modified Solvay method.
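The RSM step, fitting a second-order polynomial by least squares and locating its stationary point, can be sketched in a single factor as follows. The solver and toy data are illustrative; the actual study fits a four-variable quadratic from the CCD runs.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations."""
    S = lambda p: sum(x ** p for x in xs)
    Sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
    A = [[S(0), S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
    return solve(A, [Sy(0), Sy(1), Sy(2)])

def stationary_point(b):
    """Optimum of the fitted one-factor quadratic (a maximum if b[2] < 0)."""
    return -b[1] / (2.0 * b[2])

# toy response generated from y = 1 + 2x - x^2 (maximum at x = 1)
b = fit_quadratic([-2.0, -1.0, 0.0, 1.0, 2.0], [-7.0, -2.0, 1.0, 2.0, 1.0])
```

A response optimizer generalizes the `stationary_point` step to several factors, solving for the point where all partial derivatives of the fitted surface vanish.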

Keywords: CO2 capture, water desalination, Response Surface Methodology, bubble column reactor

Procedia PDF Downloads 266
229 Amino Acid Responses of Wheat Cultivars under Glasshouse Drought Accurately Predict Yield-Based Drought Tolerance in the Field

Authors: Arun K. Yadav, Adam J. Carroll, Gonzalo M. Estavillo, Greg J. Rebetzke, Barry J. Pogson

Abstract:

Water limits crop productivity, so selecting for a minimal yield gap in drier environments is critical to mitigate against climate change and land-use pressures. To date, no markers measured in glasshouses have been reported to predict field-based drought tolerance. In the field, the best measure of drought tolerance is the yield gap, but this requires multisite trials that are an order of magnitude more resource-intensive and can be affected by weather variation. We investigated the responses of relative water content (RWC), stomatal conductance (gs), chlorophyll content and metabolites in flag leaves of commercial wheat (Triticum aestivum L.) cultivars to three drought treatments in glasshouse and field environments. We observed strong genetic associations between glasshouse-based RWC, metabolites and yield-gap-based drought tolerance (YDT): the ratio of yield under water-limited versus well-watered conditions across 24 field environments spanning sites and seasons. Critically, the RWC response to glasshouse drought was strongly associated with both YDT (r2 = 0.85, p < 8E-6) and RWC under field drought (r2 = 0.77, p < 0.05). Multiple regression analyses revealed that 98% of genetic YDT variance was explained by the drought responses of four metabolites: serine, asparagine, methionine and lysine (R2 = 0.98; p < 0.01). The fitted coefficients suggested that, for given levels of serine and asparagine, stronger methionine and lysine accumulation was associated with higher YDT. Collectively, our results demonstrate that high-throughput, targeted metabolic phenotyping of glasshouse-grown plants may be an effective tool for selecting wheat cultivars with high YDT in the field.

Keywords: drought stress, grain yield, metabolomics, stomatal conductance, wheat

Procedia PDF Downloads 243
228 Maximizing Profit Using Optimal Control by Exploiting the Flexibility in Thermal Power Plants

Authors: Daud Mustafa Minhas, Raja Rehan Khalid, Georg Frey

Abstract:

Next-generation power systems draw heavily on freely available renewable energy sources (RES). During periods of low-cost RES operation, the price of electricity drops significantly and sometimes even becomes negative. In such periods it would seem advisable to shut down traditional power plants (e.g. coal power plants) to avoid losses. In fact, this is not a cost-effective solution, because these plants incur shutdown and startup costs; moreover, they require a certain time to shut down and a sufficient pause before starting up again, increasing inefficiency in the whole power network. Hence, there is always a trade-off between avoiding negative electricity prices and the startup costs of power plants. To exploit this trade-off and increase the profit of a power plant, two main contributions are made: 1) introducing retrofit technology for a state-of-the-art coal power plant; 2) proposing an optimal control strategy for the power plant that exploits several flexibility features, namely an improved ramp rate, a reduced startup time and a lower minimum load. The control strategy is solved as a mixed-integer linear program (MILP), ensuring an optimal solution for the profit maximization problem. Extensive comparisons are made between pre- and post-retrofit coal power plants of equal efficiency under different electricity price scenarios. The study concludes that if the power plant must remain in the market (providing services), greater flexibility translates into a direct economic advantage for the plant operator.
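The trade-off between riding through a low-price hour and paying a startup cost can be illustrated by exhaustively scoring on/off schedules for a toy single-unit problem. A real unit commitment model would be posed as a MILP with ramp-rate and minimum-load constraints, which this brute-force sketch omits; all parameter values are invented.

```python
from itertools import product

def best_schedule(prices, p_max, marginal_cost, startup_cost, initially_on=True):
    """Exhaustive search over on/off plans: profit is (price - marginal_cost)
    * p_max for each 'on' hour, minus a startup cost per off->on transition."""
    best = (float("-inf"), None)
    for plan in product([0, 1], repeat=len(prices)):
        profit, prev = 0.0, 1 if initially_on else 0
        for u, price in zip(plan, prices):
            if u:
                profit += (price - marginal_cost) * p_max
                if not prev:
                    profit -= startup_cost
            prev = u
        if profit > best[0]:
            best = (profit, plan)
    return best

# hour 2 has a price below marginal cost, but restarting costs more than the
# loss of riding through, so the optimal plan keeps the unit on
profit_ride, plan_ride = best_schedule([40.0, 25.0, 38.0], 1.0, 30.0, 20.0)

# with a strongly negative price, shutting down (and staying down) wins
profit_stop, plan_stop = best_schedule([40.0, -5.0, 38.0], 1.0, 30.0, 20.0)
```

Retrofits that lower the minimum load or the startup cost shift exactly this break-even point, which is why the flexibility features enumerated above carry direct economic value.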

Keywords: discrete optimization, power plant flexibility, profit maximization, unit commitment model

Procedia PDF Downloads 120
227 Robotic Assisted vs Traditional Laparoscopic Partial Nephrectomy Peri-Operative Outcomes: A Comparative Single Surgeon Study

Authors: Gerard Bray, Derek Mao, Arya Bahadori, Sachinka Ranasinghe

Abstract:

The EAU currently recommends partial nephrectomy as the preferred management for localised cT1 renal tumours, irrespective of surgical approach. With the advent of robotic assisted partial nephrectomy, there is growing evidence that warm ischaemia time may be reduced compared with the traditional laparoscopic approach. There are still no clear differences between the two approaches with regard to other peri-operative and oncological outcomes. A limitation of the current literature is the lack of single-surgeon series comparing the two approaches, as existing studies often include multiple operators of different experience levels. To the best of our knowledge, this is the first single-surgeon series comparing peri-operative outcomes of robotic assisted and laparoscopic partial nephrectomy. The study design reduces intra-operator bias while maintaining an adequate sample size to assess differences in outcomes between the two approaches. We retrospectively compared patient demographics, peri-operative outcomes, and renal function derangements for all partial nephrectomies undertaken by a single surgeon experienced in both laparoscopic and robotic surgery. Warm ischaemia time, length of stay, and acute renal function deterioration were all significantly reduced with robotic partial nephrectomy compared with laparoscopic partial nephrectomy. This study highlights the benefits of robotic partial nephrectomy. Further prospective studies with larger sample sizes would be valuable additions to the current literature.

Keywords: partial nephrectomy, robotic assisted partial nephrectomy, warm ischaemia time, peri-operative outcomes

Procedia PDF Downloads 120
226 Liquid Tin(II) Alkoxide Initiators for Use in the Ring-Opening Polymerisation of Cyclic Ester Monomers

Authors: Sujitra Ruengdechawiwat, Robert Molloy, Jintana Siripitayananon, Runglawan Somsunan, Paul D. Topham, Brian J. Tighe

Abstract:

The main aim of this research has been to design and synthesize completely soluble liquid tin(II) alkoxide initiators for use in the ring-opening polymerisation (ROP) of cyclic ester monomers. This is in contrast to conventional tin(II) alkoxides in solid form, which tend to be molecular aggregates and difficult to dissolve. The liquid initiators prepared were bis(tin(II) monooctoate) diethylene glycol ([Sn(Oct)]2DEG) and bis(tin(II) monooctoate) ethylene glycol ([Sn(Oct)]2EG). Their efficiencies as initiators in the bulk ROP of ε-caprolactone (CL) at 130 °C were studied kinetically by dilatometry. Kinetic data over the 20-70% conversion range were used to construct both first-order and zero-order rate plots. The rate data fitted more closely to first-order kinetics with respect to the monomer concentration and gave higher first-order rate constants than the corresponding tin(II) octoate/diol initiating systems normally used to generate the tin(II) alkoxide in situ. Since the ultimate objective of this work is to produce copolymers suitable for biomedical use as absorbable monofilament surgical sutures, poly(L-lactide-co-ε-caprolactone) 75:25 mol %, P(LL-co-CL), copolymers were synthesized using both solid and liquid tin(II) alkoxide initiators at 130 °C for 48 h. The statistical copolymers were obtained in near-quantitative yields with compositions (from 1H NMR) close to the initial comonomer feed ratios. The monomer sequencing (from 13C NMR) was partly random and partly blocky (gradient-type) owing to the widely differing monomer reactivity ratios (rLL >> rCL). From GPC, the copolymers obtained using the soluble liquid tin(II) alkoxides were found to have higher molecular weights (Mn = 40,000-100,000) than those from the only partially soluble solid initiators (Mn = 30,000-52,000).
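The first-order kinetic analysis amounts to fitting ln(1/(1 - x)) against time through the origin, where x is the fractional monomer conversion, since ln([M]0/[M]) = kt for first-order kinetics in monomer. A minimal sketch with synthetic (not experimental) data:

```python
import math

def first_order_k(times, conversions):
    """Through-origin least-squares slope of ln(1/(1 - x)) vs t, i.e. the
    apparent first-order rate constant for monomer consumption."""
    ys = [math.log(1.0 / (1.0 - x)) for x in conversions]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

# synthetic conversions generated from x(t) = 1 - exp(-k t) with k = 0.05
times = [10.0, 20.0, 30.0]
conv = [1.0 - math.exp(-0.05 * t) for t in times]
k_fit = first_order_k(times, conv)
```

A larger fitted k for the preformed liquid alkoxides than for the in-situ octoate/diol systems is what the dilatometry comparison in the abstract reports.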

Keywords: biodegradable polyesters, poly(L-lactide-co-ε-caprolactone), ring-opening polymerisation, tin(II) alkoxide

Procedia PDF Downloads 176
225 Consequences of Some Remediative Techniques Used in Sewaged Soil Bioremediation on Indigenous Microbial Activity

Authors: E. M. Hoballah, M. Saber, A. Turky, N. Awad, A. M. Zaghloul

Abstract:

Remediation of cultivated sewaged soils in Egypt has become an important concern in the last decade for producing healthy crops and protecting human health. In this respect, a greenhouse experiment was conducted in which contaminated sewaged soil was treated with modified forms of 2% bentonite (T1), 2% kaolinite (T2), 1% bentonite + 1% kaolinite (T3), 2% probentonite (T4), 2% prokaolinite (T5), 1% bentonite + 0.5% kaolinite + 0.5% rock phosphate (RP) (T6), 2% iron oxide (T7) and 1% iron oxide + 1% RP (T8) as remediative materials. Untreated soil served as a control. All soil samples were incubated for 2 months at 25 °C and kept at field capacity throughout the experiment. Carbon dioxide (CO2) efflux from both treated and untreated soils, used as a biomass indicator, was measured throughout the incubation, and the kinetic parameters of the best-fitting models describing the phenomenon were used to evaluate the success of the remediation. The results indicated that, according to these kinetic parameters, CO2 efflux from remediated soils decreased significantly compared with the control, with rate values varying according to the type of remediative material applied. In addition, the microbial biomass data showed that Ni and Zn were the potentially toxic elements (PTEs) most responsible for the decline of microbial activity in untreated soil, whereas Ni was the only influential pollutant in treated soils. Although all applied materials significantly decreased the hazards of PTEs in treated soil, modified bentonite was the best treatment among the materials used. The different mechanisms operating between the applied materials and the PTEs found in the studied sewaged soil are discussed.

Keywords: remediation, potential toxic elements, soil biomass, sewage

Procedia PDF Downloads 207
224 Radiation Protection Assessment of the Emission of a d-t Neutron Generator: Simulations with MCNP Code and Experimental Measurements in Different Operating Conditions

Authors: G. M. Contessa, L. Lepore, G. Gandolfo, C. Poggi, N. Cherubini, R. Remetti, S. Sandri

Abstract:

Practical guidelines are provided in this work for the safe use of a portable d-t Thermo Scientific MP-320 neutron generator producing pulsed 14.1 MeV neutron beams. The neutron generator's emission was tested experimentally and reproduced with the MCNPX Monte Carlo code. The simulations were particularly accurate: even the generator's internal components were reproduced on the basis of X-ray radiographic images collected ad hoc. Measurement campaigns were conducted under different standard experimental conditions using an LB 6411 neutron detector, properly calibrated at three different energies, and simulated and experimental data were compared. In order to estimate the dose to the operator as a function of the operating conditions and the energy spectrum, the most appropriate value of the conversion factor between neutron fluence and ambient dose equivalent was identified, taking into account both the direct and the scattered components. The results of the simulations show that, in real situations where there is no information about the neutron spectrum at the point where the dose has to be evaluated, it is possible (and in any case conservative) to convert the measured count rate by means of the conversion factor corresponding to 14 MeV. This outcome holds generally when using this type of generator, enabling a more accurate design of experimental activities in different setups. The increasingly widespread use of this type of device for industrial and medical applications makes the results of this work of interest in many situations, especially as a support for the definition of appropriate radiation protection procedures and, in general, for risk analysis.

Keywords: instrumentation and monitoring, management of radiological safety, measurement of individual dose, radiation protection of workers

Procedia PDF Downloads 111
223 Sustainable Approach to Fabricate Titanium Nitride Film on Steel Substrate by Using Automotive Plastics Waste

Authors: Songyan Yin, Ravindra Rajarao, Veena Sahajwalla

Abstract:

Automotive plastics waste (widely known as auto-fluff or ASR) is a complicated mixture of various plastics incorporating a wide range of additives and fillers such as titanium dioxide, magnesium oxide, and silicon dioxide. Automotive plastics waste is difficult to recycle, and its landfilling poses a significant threat to the environment. In this study, a sustainable technology is suggested for fabricating a protective nanoscale TiN thin film on a steel substrate surface using automotive waste plastics as the titanium and carbon resources. When automotive plastics waste is heated with steel at elevated temperature in a nitrogen atmosphere, the titanium dioxide contained in the ASR undergoes carbothermal reduction and nitridation reactions on the surface of the steel substrate, forming a nanoscale titanium nitride thin film on the steel surface. The synthesis of the TiN film on the steel substrate by this technology was confirmed by X-ray photoelectron spectroscopy, high-resolution X-ray diffraction, field emission scanning electron microscopy, high-resolution transmission electron microscopy fitted with energy dispersive X-ray spectroscopy, and inductively coupled plasma mass spectrometry. The sustainably fabricated TiN film was verified to be dense and well crystallized and to provide good oxidation resistance to the steel substrate. This sustainable fabrication technology is manoeuvrable, reproducible and of great economic and environmental benefit: it not only reduces the cost of TiN coating on steel surfaces, but also provides a sustainable environmental solution for recycling automotive plastics waste. Moreover, high-value copper droplets and char residues were also extracted from this unique fabrication process.

Keywords: automotive plastics waste, carbothermal reduction and nitridation, sustainable, TiN film

Procedia PDF Downloads 367
222 Generalized Additive Model for Estimating Propensity Score

Authors: Tahmidul Islam

Abstract:

The propensity score matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms for the covariates is the technique most used in practice; logistic regression with cubic splines is also used to retain flexibility in the model. However, choosing the functional form of the logistic regression model remains an open question, since the effectiveness of PSM depends on how accurately the PS is estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be more appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for greater accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the generalized additive model (GAM) in various linear and non-linear settings and to compare its performance with that of ordinary logistic regression. A GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. Various simple and complex models have been considered for treatment under several conditions (small/large sample, low/high number of treatment units) to examine which method leads to better covariate balance in the matched dataset. It is found that logistic regression is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
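The matching and balance-checking steps that follow PS estimation can be sketched generically: greedy nearest-neighbour matching on the score, and the standardized mean difference as the balance diagnostic. Function names and data are illustrative, not the simulation code used in the study.

```python
def nearest_neighbor_match(ps_treated, ps_control):
    """Greedy 1:1 nearest-neighbour matching on the propensity score (with
    replacement); returns the matched control index for each treated unit."""
    return [min(range(len(ps_control)), key=lambda j: abs(p - ps_control[j]))
            for p in ps_treated]

def std_mean_diff(x_treated, x_control):
    """Standardized mean difference of a covariate, the usual balance
    diagnostic: (mean_t - mean_c) / sqrt((var_t + var_c) / 2)."""
    mt = sum(x_treated) / len(x_treated)
    mc = sum(x_control) / len(x_control)
    vt = sum((v - mt) ** 2 for v in x_treated) / (len(x_treated) - 1)
    vc = sum((v - mc) ** 2 for v in x_control) / (len(x_control) - 1)
    return (mt - mc) / ((vt + vc) / 2.0) ** 0.5
```

Whether the PS comes from logistic regression or a GAM, it is the post-matching standardized mean differences of the covariates that decide which estimator "wins" in comparisons like the one above.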

Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching

Procedia PDF Downloads 339
221 Intelligent Control of Bioprocesses: A Software Application

Authors: Mihai Caramihai, Dan Vasilescu

Abstract:

The main research objective of the experimental bioprocess analyzed in this paper was to obtain large biomass quantities. The bioprocess was performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract and sodium chloride. The reactor was equipped with pH, temperature, dissolved oxygen and agitation controllers. The operating parameters were 37 °C, 1.2 atm, 250 rpm and an air flow rate of 15 L/min. The main objective of this paper is to present a case study demonstrating that intelligent control, which describes the complexity of the biological process in the qualitative and subjective manner in which a human operator perceives it, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic has been designed. The specific objective is to present a fuzzy control approach based on human expert rules versus a modeling approach of cell growth based on experimental bioprocess data. Kinetic modeling can capture only part of the overall biosystem behavior, whereas a fuzzy control system (FCS) can manipulate incomplete and uncertain information about the process, assuring high control performance, and provides an alternative to non-linear control as it is closer to the real world. The high degree of non-linearity and time variance of bioprocesses gives rise to the need for such a control mechanism. BIOSIM, an originally developed software package, implements such a control structure. The simulation study has shown that the fuzzy technique is quite appropriate for this non-linear, time-varying system compared with a classical control method based on an a priori model.

Keywords: intelligent control, fuzzy model, bioprocess optimization

Procedia PDF Downloads 294