Search results for: operational error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3157


487 Computational Fluid Dynamics Simulations of Air Pollutant Dispersion: Validation of Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action

Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere

Abstract:

Following in-house objectives, the Central Laboratory of the Paris Police Prefecture (LCPP) conducted a general review of models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, LCPP postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and evaluation of FDS within the framework of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. The CUTE dataset, collected in the city of Hamburg and on its wind-tunnel mock-up, has been used. We have compared FDS results with wind tunnel measurements from the CUTE trials on the one hand and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for the simulations is the transfer of obstacle geometry information to the format required by FDS. We have therefore developed Python codes to automatically convert building and topographic data into the FDS input file. To evaluate the FDS predictions against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE), and the fraction of predictions within a factor of two of the observations (FAC2). Like the CFD models tested in the COST Action, the FDS results show good agreement with the measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE fall within the acceptable tolerances.
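
To make the validation step concrete, the three metrics named above can be computed from paired observed/predicted concentrations as in the following Python sketch, which uses their standard definitions; the arrays shown are illustrative placeholders, not CUTE measurements.

```python
import numpy as np

def validation_metrics(obs, pred):
    """Standard dispersion-model validation metrics for paired
    observed/predicted concentrations: FB, NMSE, FAC2."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())
    ratio = pred / obs
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))  # fraction within a factor of 2
    return {"FB": fb, "NMSE": nmse, "FAC2": fac2}

# Illustrative values only -- not CUTE wind-tunnel data.
observed  = [1.2, 0.8, 2.5, 0.4, 1.9]
predicted = [1.0, 0.9, 2.1, 0.7, 1.5]
print(validation_metrics(observed, predicted))
```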

Keywords: numerical simulations, atmospheric dispersion, cost ES1006 action, CFD model, cute experiments, wind tunnel data, numerical results

Procedia PDF Downloads 113
486 Nursing Professionals’ Perception of the Work Environment, Safety Climate and Job Satisfaction in the Brazilian Hospitals during the COVID-19 Pandemic

Authors: Ana Claudia de Souza Costa, Beatriz de Cássia Pinheiro Goulart, Karine de Cássia Cavalari, Henrique Ceretta Oliveira, Edineis de Brito Guirardello

Abstract:

Background: During the COVID-19 pandemic, nursing represented the largest category of health professionals on the front line. Investigating the practice environment and the job satisfaction of nursing professionals during the pandemic is therefore fundamental, since both reflect on the quality of care and the safety climate. The aim of this study was to evaluate and compare nursing professionals' perceptions of the work environment, job satisfaction, and safety climate across different hospitals and work shifts during the COVID-19 pandemic. Method: This is a cross-sectional survey of 130 nursing professionals from public, private, and mixed hospitals in Brazil. Data were collected with an electronic form containing personal and occupational variables and measures of the work environment, job satisfaction, and safety climate. The data were analyzed using descriptive statistics and ANOVA or Kruskal-Wallis tests according to the data distribution, which was evaluated with the Shapiro-Wilk test. The analysis was performed in SPSS 23 at a significance level of 5%. Results: The mean age of the participants was 35 years (±9.8), with a mean of 6.4 years (±6.7) of working experience in the institution. Overall, the nursing professionals evaluated the work environment as favorable; they were dissatisfied with their job in terms of pay, promotion, benefits, contingent rewards, and operating procedures; satisfied with coworkers, the nature of the work, supervision, and communication; and had a negative perception of the safety climate. When comparing the hospitals, it was found that they did not differ in their perception of the work environment and safety climate. However, they differed with regard to job satisfaction: nursing professionals from public hospitals were more dissatisfied with promotion than professionals from private (p=0.02) and mixed hospitals (p<0.01), and nursing professionals from mixed hospitals were more satisfied than those from private hospitals (p=0.04) with regard to supervision. Participants working night shifts had the worst perceptions of the work environment with regard to nurse participation in hospital affairs (p=0.02), nursing foundations for quality care (p=0.01), and nurse manager ability, leadership and support (p=0.02), as well as of the safety climate (p<0.01) and of job satisfaction related to contingent rewards (p=0.04), nature of work (p=0.03), and supervision (p<0.01). Conclusion: The nursing professionals had a favorable perception of the environment and safety climate but differed among hospitals regarding job satisfaction in the promotion and supervision domains. There was also a difference between work shifts, with night shifts showing the lowest scores, except for satisfaction with operational conditions.
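
As an illustrative analogue of the analysis route described above (Shapiro-Wilk screening, then ANOVA or Kruskal-Wallis), the Python/SciPy sketch below shows the branching logic; the study itself used SPSS 23, and the group scores here are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical job-satisfaction scores for three hospital types.
public  = np.array([3.1, 2.8, 3.4, 2.9, 3.0])
private = np.array([3.6, 3.8, 3.3, 3.9, 3.5])
mixed   = np.array([3.4, 3.7, 3.2, 3.6, 3.8])

groups = [public, private, mixed]
# Normality screening: Shapiro-Wilk p-value per group.
normal = all(stats.shapiro(g)[1] > 0.05 for g in groups)

if normal:                       # parametric route
    stat, p = stats.f_oneway(*groups)
    test = "one-way ANOVA"
else:                            # non-parametric route
    stat, p = stats.kruskal(*groups)
    test = "Kruskal-Wallis"

print(f"{test}: statistic={stat:.3f}, p={p:.3f} (alpha = 0.05)")
```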

Keywords: health facility environment, job satisfaction, patient safety, nursing

Procedia PDF Downloads 134
485 The Effect of Acute Consumption of a Nutritional Supplement Derived from Vegetable Extracts Rich in Nitrate on Athletic Performance

Authors: Giannis Arnaoutis, Dimitra Efthymiopoulou, Maria-Foivi Nikolopoulou, Yannis Manios

Abstract:

AIM: Nitrate-containing supplements have been used extensively as ergogenic aids in many sports. However, extract fractions from plant-based nutritional sources high in nitrate and their effect on athletic performance have not been systematically investigated. The purpose of the present study was to examine the possible effect of acute consumption of a “smart mixture” of beetroot and rocket extracts on exercise capacity. MATERIAL & METHODS: Twelve healthy, nonsmoking, recreationally active males (age: 25±4 years, % fat: 15.5±5.7, fat free mass: 65.8±5.6 kg, VO2 max: 45.4±6.1 mL·kg⁻¹·min⁻¹) participated in a double-blind, placebo-controlled trial, in a randomized and counterbalanced order. Eligibility criteria for participation in this study included a normal physical examination and the absence of any metabolic, cardiovascular, or renal disease. All participants completed a time-to-exhaustion cycling test at 75% of their maximum power output twice. The subjects consumed either capsules containing 360 mg of nitrate in total or placebo capsules, in the morning, in a fasted state. After 3 h of passive recovery, the performance test followed. Blood samples were collected upon arrival of the participants and 3 hours after consumption of the corresponding capsules. Time until exhaustion, pre- and post-test lactate concentrations, and the rating of perceived exertion at the same time points were assessed. RESULTS: Paired-samples t-test analysis found a significant difference in time to exhaustion between the nitrate and placebo trials [16.1±3.0 vs. 13.5±2.6 min, p=0.04]. No significant differences were observed for the concentrations of lactic acid or for the Borg scale values between the two trials (p>0.05). CONCLUSIONS: Based on the results of the present study, it appears that a nutritional supplement derived from vegetable extracts rich in nitrate improves athletic performance in recreationally active young males. However, the precise mechanism is not clear, and future studies are needed. Acknowledgment: This research has been co‐financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH – CREATE – INNOVATE (project code: T2EDK-00843).
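
The comparison reported above is a paired-samples t-test on the two exhaustion times per subject; a minimal SciPy sketch is shown below with hypothetical per-subject values, not the study's raw data.

```python
from scipy import stats

# Hypothetical time-to-exhaustion values (min) for the same 12 subjects.
nitrate = [15.2, 18.0, 13.9, 17.5, 16.8, 14.7, 19.2, 15.9, 16.4, 12.8, 17.1, 15.6]
placebo = [13.0, 15.4, 12.1, 14.8, 14.0, 12.6, 16.1, 13.2, 14.5, 11.0, 14.9, 12.9]

t_stat, p_value = stats.ttest_rel(nitrate, placebo)  # paired-samples t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```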

Keywords: sports performance, ergogenic supplements, nitrate, extract fractions

Procedia PDF Downloads 48
484 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory

Authors: Kiana Zeighami, Morteza Ozlati Moghadam

Abstract:

Designing a high accuracy and high precision motion controller is one of the important issues in today’s industry. There are effective solutions available in the industry but the real-time performance, smoothness and accuracy of the movement can be further improved. This paper discusses a complete solution to carry out the movement of three stepper motors in three dimensions. The objective is to provide a method to design a fully integrated System-on-Chip (SOC)-based motion controller to reduce the cost and complexity of production by incorporating Field Programmable Gate Array (FPGA) into the design. In the proposed method the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for three axes. A profile generator module is designed to realize the interpolation algorithm by translating the position data to the real-time pulses. This paper discusses an approach to implement the linear interpolation algorithm, since it is one of the fundamentals of robots’ movements and it is highly applicable in motion control industries. Along with full profile trajectory, the triangular drive is implemented to eliminate the existence of error at small distances. To integrate the parallelism and real-time performance of FPGA with the power of Central Processing Unit (CPU) in executing complex and sequential algorithms, the NIOS II soft-core processor was added into the design. This paper presents different operating modes such as absolute, relative positioning, reset and velocity modes to fulfill the user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, a precise and smooth movement of stepper motors was observed which proved the effectiveness of this approach.
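
As a behavioural illustration of the linear interpolation stage described above, the following Python sketch implements a DDA/Bresenham-style step generator that spreads pulses over the three axes so they finish together; it is only a software model of the idea, not the authors' FPGA/NIOS II implementation, and the step resolution is an assumed value.

```python
def linear_interpolation_3axis(target, steps_per_mm=100):
    """Generate per-axis step pulses for a straight-line move.

    DDA/Bresenham style: the dominant axis steps every tick, while the
    other axes accumulate error terms, so all three axes reach the
    target simultaneously.  Behavioural model only.
    """
    steps = [round(t * steps_per_mm) for t in target]   # pulses per axis
    major = max(range(3), key=lambda i: abs(steps[i]))  # dominant axis
    n = abs(steps[major])
    err = [0, 0, 0]
    pulses = []                                          # (dx, dy, dz) per tick
    for _ in range(n):
        tick = [0, 0, 0]
        for i in range(3):
            err[i] += abs(steps[i])
            if err[i] >= n:
                err[i] -= n
                tick[i] = 1 if steps[i] > 0 else -1
        pulses.append(tuple(tick))
    return pulses

# Example: move 2.0 mm, 1.0 mm, 0.5 mm in X, Y, Z.
print(len(linear_interpolation_3axis((2.0, 1.0, 0.5))))  # number of ticks
```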

Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping

Procedia PDF Downloads 190
483 Novel Hole-Bar Standard Design and Inter-Comparison for Geometric Errors Identification on Machine-Tool

Authors: F. Viprey, H. Nouira, S. Lavernhe, C. Tournier

Abstract:

Manufacturing of freeform parts may be achieved on 5-axis machine tools, currently considered a common means of production. In particular, the geometrical quality of the freeform parts depends on the accuracy of the multi-axis structural loop, which is composed of several component assemblies maintaining the relative positioning between the tool and the workpiece. Therefore, to reach high geometrical quality of the freeform parts, the geometric errors of the 5-axis machine should be evaluated and compensated, which requires mastering the deviations between the tool and the workpiece (volumetric accuracy). In this study, a novel hole-bar design was developed and used for the characterization of the geometric errors of an RRTTT 5-axis machine tool. The hole-bar standard is made of Invar, selected because it is less sensitive to thermal drift. The proposed design allows one to extract three intrinsic parameters: one linear positioning error and two straightness errors. These parameters can be obtained by measuring the cylindricity of 12 holes (bores) and 11 cylinders located on a perpendicular plane. By mathematical analysis, the coordinates of twelve 3D points can be identified; they correspond to the intersection of each hole axis with the least-squares plane passing through the axes of two perpendicular neighbouring cylinders. The hole-bar was calibrated using a precision CMM at LNE, traceable to the SI meter definition. The reversal technique was applied in order to separate the form errors of the hole-bar from the motion errors of the mechanical guiding systems. An inter-comparison was additionally conducted between four NMIs (National Metrology Institutes) within the EMRP IND62 JRP-TIM project. Afterwards, the hole-bar was integrated into the RRTTT 5-axis machine tool to identify its volumetric errors. Measurements were carried out in real time and combined raw data acquired by the Renishaw RMP600 touch probe with the linear and rotary encoders. The geometric errors of the 5-axis machine were also evaluated by an accurate laser tracer interferometer system, and the results were compared to those obtained with the hole-bar.
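
The identification of the twelve 3D points described above amounts to fitting a least-squares plane and intersecting each hole axis with it; a generic NumPy sketch of those two operations follows (illustrative data only, not the calibration software used at LNE).

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3D points.
    Returns (centroid, unit normal); the normal is the singular vector
    associated with the smallest singular value."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

def line_plane_intersection(p0, direction, centroid, normal):
    """Intersection of the line p0 + t*direction with the fitted plane."""
    direction = np.asarray(direction, float)
    t = np.dot(centroid - p0, normal) / np.dot(direction, normal)
    return p0 + t * direction

# Illustrative data: noisy points near z = 0 and a slightly tilted hole axis.
plane_pts = np.array([[0, 0, 0.01], [1, 0, -0.02], [0, 1, 0.00], [1, 1, 0.01]])
c, n = fit_plane(plane_pts)
print(line_plane_intersection(np.array([0.5, 0.5, 5.0]), [0.0, 0.1, -1.0], c, n))
```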

Keywords: volumetric errors, CMM, 3D hole-bar, inter-comparison

Procedia PDF Downloads 363
482 Fabrication of Electrospun Microbial Siderophore-Based Nanofibers: A Wound Dressing Material to Inhibit the Wound Biofilm Formation

Authors: Sita Lakshmi Thyagarajan

Abstract:

Nanofibers have left hardly any field untouched by their scientific innovations, and the medical field is no exception. Electrospinning has proven to be an excellent method for the synthesis of nanofibers, which have attracted interest for many biomedical applications. The formation of biofilms in wounds often leads to chronic infections that are difficult to treat with antibiotics. In order to minimize biofilm formation and enhance wound healing, this study focused on the preparation of potential nanofibers. Siderophore-incorporated nanofibers were electrospun using biocompatible polymers onto a collagen scaffold and were fabricated into a biomaterial suitable for the inhibition of biofilm formation. The purified microbial siderophore was blended with poly-L-lactide (PLLA) and poly(ethylene oxide) (PEO) in a suitable solvent. Fabrication of the siderophore-blended nanofibers onto the collagen surface was done using standard protocols. The fabricated scaffold was subjected to physico-chemical characterization. The results indicated that the fabricated nanofibrous scaffold possessed the characteristics expected of a potential scaffold, with nanoscale morphology and microscale arrangement. The influence of the PLLA and PEO solution concentration, applied voltage, tip-to-collector distance, feeding rate, and collector speed was studied. The optimal parameters, such as the PLLA/PEO concentration ratio, applied voltage, tip-to-collector distance, feeding rate, and collector speed, were finalized based on trial-and-error experiments. The fibers were found to have a uniform diameter with an aligned morphology. The overall study suggests that the prepared siderophore-entrapped nanofibers could be used as a potent wound dressing material for the inhibition of biofilm formation.

Keywords: biofilms, electrospinning, nano-fibers, siderophore, tissue engineering scaffold

Procedia PDF Downloads 106
481 Seismic Impact and Design on Buried Pipelines

Authors: T. Schmitt, J. Rosin, C. Butenweg

Abstract:

Seismic design of buried pipeline systems for energy and water supply is not only important for plant and operational safety, but in particular for the maintenance of supply infrastructure after an earthquake. Past earthquakes have shown the vulnerability of pipeline systems. After the Kobe earthquake in Japan in 1995 for instance, in some regions the water supply was interrupted for almost two months. The present paper shows special issues of the seismic wave impacts on buried pipelines, describes calculation methods, proposes approaches and gives calculation examples. Buried pipelines are exposed to different effects of seismic impacts. This paper regards the effects of transient displacement differences and resulting tensions within the pipeline due to the wave propagation of the earthquake. Other effects are permanent displacements due to fault rupture displacements at the surface, soil liquefaction, landslides and seismic soil compaction. The presented model can also be used to calculate fault rupture induced displacements. Based on a three-dimensional Finite Element Model parameter studies are performed to show the influence of several parameters such as incoming wave angle, wave velocity, soil depth and selected displacement time histories. In the computer model, the interaction between the pipeline and the surrounding soil is modeled with non-linear soil springs. A propagating wave is simulated affecting the pipeline punctually independently in time and space. The resulting stresses mainly are caused by displacement differences of neighboring pipeline segments and by soil-structure interaction. The calculation examples focus on pipeline bends as the most critical parts. Special attention is given to the calculation of long-distance heat pipeline systems. Here, in regular distances expansion bends are arranged to ensure movements of the pipeline due to high temperature. Such expansion bends are usually designed with small bending radii, which in the event of an earthquake lead to high bending stresses at the cross-section of the pipeline. Therefore, Karman's elasticity factors, as well as the stress intensity factors for curved pipe sections, must be taken into account. The seismic verification of the pipeline for wave propagation in the soil can be achieved by observing normative strain criteria. Finally, an interpretation of the results and recommendations are given taking into account the most critical parameters.

Keywords: buried pipeline, earthquake, seismic impact, transient displacement

Procedia PDF Downloads 165
480 Maturity Classification of Oil Palm Fresh Fruit Bunches Using Thermal Imaging Technique

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Reza Ehsani, Hawa Ze Jaffar, Ishak Aris

Abstract:

Ripeness estimation of oil palm fresh fruit is an important process that affects the profitability and salability of oil palm fruits. The maturity or ripeness of the oil palm fruits influences the quality of the palm oil. The conventional procedure involves physical grading of Fresh Fruit Bunch (FFB) maturity by counting the number of loose fruits per bunch. This physical classification of oil palm FFB is costly, time consuming, and its results are subject to human error. Hence, many researchers have tried to develop methods for ascertaining the maturity of oil palm fruits and thereby, indirectly, the oil content of individual palm fruits without the need for exhaustive oil extraction and analysis. This research investigates the potential of infrared (thermal) images as a predictor to classify oil palm FFB ripeness. A total of 270 oil palm fresh fruit bunches of the most common oil palm cultivar, Nigrescens, were collected according to three maturity categories: under ripe, ripe, and over ripe. Each sample was scanned by the thermal imaging cameras FLIR E60 and FLIR T440. The average temperature of each bunch was calculated using image processing in the FLIR Tools and FLIR ThermaCAM Researcher Pro 2.10 software environments. The results show that temperature decreased from immature to over mature oil palm FFBs. An overall analysis-of-variance (ANOVA) test showed that this predictor gave significant differences between the under ripe, ripe, and over ripe maturity categories. This shows that temperature can be a good indicator to classify oil palm FFB. Classification analysis was performed using the temperature of the FFB as a predictor through Linear Discriminant Analysis (LDA), Mahalanobis Discriminant Analysis (MDA), Artificial Neural Network (ANN), and K-Nearest Neighbor (KNN) methods. The highest overall classification accuracy was 88.2%, obtained using the Artificial Neural Network. This research shows that thermal imaging and neural network methods can be used as predictors for oil palm maturity classification.
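
As a toy illustration of the classification step, a single temperature feature can be fed to any of the listed classifiers; the scikit-learn sketch below uses KNN on fabricated bunch temperatures and does not reproduce the study's data or its reported 88.2% ANN accuracy.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Fabricated average bunch temperatures (deg C); values decrease with maturity,
# mirroring the trend reported in the abstract.
rng = np.random.default_rng(0)
temp = np.concatenate([rng.normal(m, 0.4, 60) for m in (31.5, 30.5, 29.5)])
label = np.repeat(["under_ripe", "ripe", "over_ripe"], 60)

X_train, X_test, y_train, y_test = train_test_split(
    temp.reshape(-1, 1), label, test_size=0.3, random_state=0, stratify=label)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"hold-out accuracy: {knn.score(X_test, y_test):.2f}")
```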

Keywords: artificial neural network, maturity classification, oil palm FFB, thermal imaging

Procedia PDF Downloads 334
479 Exploring the Prebiotic Potential of Glucosamine

Authors: Shilpi Malik, Ramneek Kaur, Archita Gupta, Deepshikha Yadav, Ashwani Mathur, Manisha Singh

Abstract:

Glucosamine (GS) is the most abundant naturally occurring amino monosaccharide and is normally produced in the human body via cellular glucose metabolism. It is regarded as the building block of the cartilage matrix and is also an essential component of the cartilage matrix repair mechanism. Besides that, it can also be explored for its prebiotic potential, as many bacterial species are known to utilize this amino sugar by incorporating it into the peptidoglycans and lipopolysaccharides of the bacterial cell wall. Glucosamine can therefore be considered for fermentation by bacterial species present in the gut. The current study is focused on exploring the potential of glucosamine as a prebiotic. Studies were done to optimize a considerable concentration of GS reaching the GI tract to be fermented by the complex gut microbiota, and food-grade GS was added to various simulated fluids of the gastrointestinal tract (GIT), such as simulated saliva, gastric fluid (fasted and fed state), colonic fluid, etc., to detect its degradation. Since it showed an increase in microbial growth (CFU) with time, GS was further encapsulated to increase its residence time in the gut, which exhibited improved resistance to the simulated gut conditions. Moreover, the prepared microspheres were optimized and characterized for their encapsulation efficiency and toxicity. To further substantiate the prebiotic activity of glucosamine, studies were also performed to determine the effect of glucosamine on known probiotic bacterial species, i.e., Lactobacillus delbrueckii (MTCC 911) and Bifidobacterium bifidum (MTCC 5398). For the culture conditions, glucosamine was added to MRS media in anaerobic tubes at 0.20%, 0.40%, 0.60%, 0.80%, and 1.0%, respectively. MRS media without GS was included in this experiment as the control. All samples were autoclaved at 118 °C for 15 min. Active culture was added at 5% (v/v) to each anaerobic tube after cooling to room temperature and incubated at 37 °C, and biomass, pH, and viable counts were determined after 18 h of incubation. The experiment was completed in triplicate, and the results were presented as mean ± SE (standard error). The experimental results are conclusive and suggest that glucosamine holds prebiotic properties.

Keywords: gastro intestinal tract, microspheres, peptidoglycans, simulated fluid

Procedia PDF Downloads 313
478 Power Recovery from Waste Air of Mine Ventilation Fans Using Wind Turbines

Authors: Soumyadip Banerjee, Tanmoy Maity

Abstract:

The recovery of power from waste air generated by mine ventilation fans presents a promising avenue for enhancing energy efficiency in mining operations. This abstract explores the feasibility and benefits of utilizing turbine generators to capture the kinetic energy present in waste air and convert it into electrical power. By integrating turbine generator systems into mine ventilation infrastructures, the potential to harness and utilize the previously untapped energy within the waste air stream is realized. This study examines the principles underlying turbine generator technology and its application within the context of mine ventilation systems. The process involves directing waste air from ventilation fans through specially designed turbines, where the kinetic energy of the moving air is converted into rotational motion. This mechanical energy is then transferred to connected generators, which convert it into electrical power. The recovered electricity can be employed for various on-site applications, including powering mining equipment, lighting, and control systems. The benefits of power recovery from waste air using turbine generators are manifold. Improved energy efficiency within the mining environment results in reduced dependence on external power sources and associated cost savings. Additionally, this approach contributes to environmental sustainability by utilizing a previously wasted resource for power generation. Resource conservation is further enhanced, aligning with modern principles of sustainable mining practices. However, successful implementation requires careful consideration of factors such as waste air characteristics, turbine design, generator efficiency, and integration into existing mine infrastructure. Maintenance and monitoring protocols are necessary to ensure consistent performance and longevity of the turbine generator systems. While there is an initial investment associated with equipment procurement, installation, and integration, the long-term benefits of reduced energy costs and environmental impact make this approach economically viable. In conclusion, the recovery of power from waste air from mine ventilation fans using turbine generators offers a tangible solution to enhance energy efficiency and sustainability within mining operations. By capturing and converting the kinetic energy of waste air into usable electrical power, mines can optimize resource utilization, reduce operational costs, and contribute to a greener future for the mining industry.
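
For a sense of scale, the recoverable power follows the standard wind-power relation P = ½ρAv³ scaled by a turbine power coefficient; the short Python calculation below uses purely illustrative fan-outlet figures and an assumed power coefficient, not measured mine data.

```python
import math

def recoverable_power(velocity_m_s, rotor_diameter_m, cp=0.35, rho=1.2):
    """Kinetic power in an air stream, P = 0.5 * rho * A * v**3,
    scaled by an assumed turbine power coefficient cp (below the Betz limit of 0.593)."""
    area = math.pi * (rotor_diameter_m / 2.0) ** 2
    return 0.5 * rho * area * velocity_m_s ** 3 * cp

# Illustrative exhaust-air figures (not measured values).
print(f"{recoverable_power(velocity_m_s=15.0, rotor_diameter_m=2.0) / 1000:.1f} kW")
```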

Keywords: waste to energy, wind power generation, exhaust air, power recovery

Procedia PDF Downloads 10
477 Evaluation of Key Performance Indicators as Determinants of Dividend Paid on Ordinary Shares in Nigeria Banking Sector

Authors: Oliver Ikechukwu Inyiama, Boniface Uche Ugwuanyi

Abstract:

The aim of this research is to evaluate the key financial performance indicators that help both the managers and the shareholders of Nigerian banks to determine the appropriate dividend payout to their ordinary shareholders in an accounting year. Profitability, total assets, and earnings of commercial banks were selected as key performance indicators in the Nigerian banking sector. They represent the independent variables of the study, while dividend per share is the proxy for the dividend paid on ordinary shares and represents the dependent variable. The effects of profitability, total assets, and earnings on dividend per share were evaluated through the ordinary least squares method of multiple regression analysis. Tests for normality of the frequency distribution were conducted through descriptive statistics such as the Jarque-Bera statistic, skewness, and kurtosis. The rate of dividend payout was subsequently applied as an alternative dependent variable to test for robustness of the earlier results. The 64% adjusted R-squared of the pooled data indicates that profitability, total assets, and earnings explain the variation in dividend per share during the period under research, while the remaining 36% of the variation in dividend per share could be explained by changes in other variables not captured by this study as well as by the error term. The study concentrated on four leading Nigerian commercial banks, namely First Bank of Nigeria Plc, GTBank Plc, United Bank for Africa Plc, and Zenith International Bank Plc. Dividend per share was found to be positively affected by the total assets and earnings of the commercial banks. However, profitability, which was proxied by profit after tax, had a negative effect on dividend per share. The implication of the findings is that commercial banks in Nigeria pay more dividend when their fortunes are dwindling, in order to retain the confidence of the shareholders, provided their gross earnings and size are on the increase. Therefore, the management and boards of directors of Nigerian commercial banks should apply decent marketing strategies to enhance earnings through investment in profitable ventures for an improved dividend payout rate.
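
A minimal analogue of the estimation step (OLS regression of dividend per share on profitability, total assets, and earnings, with a Jarque-Bera check on the residuals) can be written with statsmodels as below; the data frame is a hypothetical placeholder, not the banks' actual panel.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import jarque_bera

# Hypothetical pooled bank-year observations (placeholder values).
df = pd.DataFrame({
    "dps":      [0.50, 0.65, 0.40, 0.80, 0.55, 0.70, 0.45, 0.75],
    "pat":      [12.0, 15.0, 9.0, 18.0, 11.0, 16.0, 10.0, 17.0],   # profit after tax
    "assets":   [300, 350, 280, 420, 310, 390, 290, 410],
    "earnings": [45, 52, 38, 60, 47, 55, 40, 58],
})

X = sm.add_constant(df[["pat", "assets", "earnings"]])
model = sm.OLS(df["dps"], X).fit()
jb_stat, jb_p, skew, kurtosis = jarque_bera(model.resid)  # residual normality check

print(model.params)
print(f"adj. R-squared = {model.rsquared_adj:.2f}, Jarque-Bera p = {jb_p:.3f}")
```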

Keywords: assets, banks, indicators, performance, profitability, shares

Procedia PDF Downloads 137
476 Computer Simulation Approach in the 3D Printing Operations of Surimi Paste

Authors: Timilehin Martins Oyinloye, Won Byong Yoon

Abstract:

Simulation technology is being adopted in many industries, with research focusing on the development of new ways in which technology becomes embedded within production, services, and society in general. 3D printing (3DP) technology is fast developing in the food industry. However, the limited processability of high-performance materials restricts the robustness of the process in some cases. Significantly, the printability of materials is the foundation for extrusion-based 3DP, with residual stress being a major challenge in the printing of complex geometries. In many situations, a trial-and-error method is used to determine the optimum printing conditions, which results in time and resource wastage. In this report, three moisture levels of surimi paste were investigated to identify an optimum 3DP material and printing conditions by probing the paste's rheology, flow characteristics in the nozzle, and post-deposition behavior using finite element method (FEM) models. Rheological tests revealed that surimi pastes with 82% moisture are suitable for 3DP. According to the FEM model, decreasing the nozzle diameter from 1.2 mm to 0.6 mm increased the die swell from 9.8% to 14.1%. The die swell ratio increased due to an increase in the pressure gradient (1.15×10⁷ Pa to 7.80×10⁷ Pa) at the nozzle exit. The nozzle diameter influenced the fluid properties, i.e., the shear rate, velocity, and pressure in the flow field, as well as the residual stress and the deformation of the printed sample, according to the FEM simulation. The post-printing stability of the model was investigated using the additive layer manufacturing (ALM) model. The ALM simulation revealed that the residual stress and total deformation of the sample were dependent on the nozzle diameter. A small nozzle diameter (0.6 mm) resulted in a greater total deformation (0.023), particularly at the top part of the model, which eventually resulted in the sample collapsing. As the nozzle diameter increased, the accuracy of the model improved up to the optimum nozzle size (1.0 mm). Validation with 3D-printed surimi products confirmed that the nozzle diameter was a key parameter affecting the geometric accuracy of 3DP of surimi paste.

Keywords: 3D printing, deformation analysis, die swell, numerical simulation, surimi paste

Procedia PDF Downloads 47
475 The Requirements of Developing a Framework for Successful Adoption of Quality Management Systems in the Construction Industry

Authors: Mohammed Ali Ahmed, Vaughan Coffey, Bo Xia

Abstract:

Quality management systems (QMSs) in the construction industry are often implemented to ensure that sufficient effort is made by companies to achieve the required levels of quality for clients. Attainment of these quality levels can result in greater customer satisfaction, which is fundamental to ensure long-term competitiveness for construction companies. However, the construction sector is still lagging behind other industries in terms of its successful adoption of QMSs, due to the relative lack of acceptance of the benefits of these systems among industry stakeholders, as well as from other barriers related to implementing them. Thus, there is a critical need to undertake a detailed and comprehensive exploration of adoption of QMSs in the construction sector. This paper comprehensively investigates in the construction sector setting, the impacts of all the salient factors surrounding successful implementation of QMSs in building organizations, especially those of external factors. This study is part of an ongoing PhD project, which aims to develop a new framework that integrates both internal and external factors affecting QMS implementation. To achieve the paper aim and objectives, interviews will be conducted to define the external factors influencing the adoption of QMSs, and to obtain holistic critical success factors (CSFs) for implementing these systems. In the next stage of data collection, a questionnaire survey will be developed to investigate the prime barriers facing the adoption of QMSs, the CSFs for their implementation, and the external factors affecting the adoption of these systems. Following the survey, case studies will be undertaken to validate and explain in greater detail the real effects of these factors on QMSs adoption. Specifically, this paper evaluates the effects of the external factors in terms of their impact on implementation success within the selected case studies. Using findings drawn from analyzing the data obtained from these various approaches, specific recommendations for the successful implementation of QMSs will be presented, and an operational framework will be developed. Finally, through a focus group, the findings of the study and the new developed framework will be validated. Ultimately, this framework will be made available to the construction industry to facilitate the greater adoption and implementation of QMSs. In addition, deployment of the applicable recommendations suggested by the study will be shared with the construction industry to more effectively help construction companies to implement QMSs, and overcome the barriers experienced by businesses, thus promoting the achievement of higher levels of quality and customer satisfaction.

Keywords: barriers, critical success factors, external factors, internal factors, quality management systems

Procedia PDF Downloads 163
474 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing by discussing various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity for their precise identification. The narrative transitions to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocities. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 17
473 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves electrical segmentation, which consists of creating coherent zones within which electrical disturbances mainly remain confined. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all layers. This unified representation can then be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-zone electrical perturbation and low variance of electrical perturbation. Our experiments show when, and in which contexts, robust electrical segmentation is beneficial.
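
A quick way to prototype the flatten-then-cluster idea is to sum edge weights across layers and run community detection on the result; the NetworkX sketch below does this on a toy multiplex grid and is not the penalised model proposed here, which enforces exactly K connected components.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def flatten_layers(layers):
    """Sum edge weights across layers that share the same vertex set."""
    flat = nx.Graph()
    for layer in layers:
        for u, v, data in layer.edges(data=True):
            w = data.get("weight", 1.0)
            if flat.has_edge(u, v):
                flat[u][v]["weight"] += w
            else:
                flat.add_edge(u, v, weight=w)
    return flat

# Toy multiplex: two operating situations of a 6-bus toy grid.
layer_a = nx.Graph([(0, 1), (1, 2), (3, 4), (4, 5), (2, 3)])
layer_b = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)])

flat = flatten_layers([layer_a, layer_b])
zones = greedy_modularity_communities(flat, weight="weight")
print([sorted(z) for z in zones])
```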

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 54
472 Evaluation of the Effect of Milk Recording Intervals on the Accuracy of an Empirical Model Fitted to Dairy Sheep Lactations

Authors: L. Guevara, Glória L. S., Corea E. E, A. Ramírez-Zamora M., Salinas-Martinez J. A., Angeles-Hernandez J. C.

Abstract:

Mathematical models are useful for identifying the characteristics of sheep lactation curves in order to develop and implement improved strategies. However, the accuracy of these models is influenced by factors such as the recording regime, mainly the intervals between test-day records (TDR). The current study aimed to evaluate the effect of different TDR intervals on the goodness of fit of the Wood model (WM) applied to dairy sheep lactations. A total of 4,494 weekly TDRs from 156 lactations of dairy crossbred sheep were analyzed. Three new databases were generated from the original weekly TDR data (7D), comprising intervals of 14 (14D), 21 (21D), and 28 (28D) days. The parameters of the WM were estimated using the “minpack.lm” package in the R software. The shape of the lactation curve (typical or atypical) was defined based on the WM parameters. The goodness of fit was evaluated using the mean square of prediction error (MSPE), the root of MSPE (RMSPE), Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the coefficient of correlation (r) between the actual and estimated total milk yield (TMY). The WM gave an adequate estimate of TMY regardless of the TDR interval (P=0.21) and the shape of the lactation curve (P=0.42). However, we found higher values of r for typical curves compared to atypical curves (0.9 vs. 0.74), with the highest values for the 28D interval (r=0.95). In the same way, we observed an overestimated peak yield (0.92 vs. 6.6 l) and an underestimated time of peak yield (21.5 vs. 1.46) in atypical curves. The best values of RMSPE were observed for the 28D interval for both lactation curve shapes. The significantly lowest values of AIC (P=0.001) and BIC (P=0.001) were obtained with the 7D interval for both typical and atypical curves. These results represent a first approach to defining an adequate recording interval for dairy sheep in Latin America and showed a better fit of the Wood model using the 7D interval. However, it is possible to obtain good estimates of TMY using a 28D interval, which reduces the sampling frequency and would save additional costs for dairy sheep producers.
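
The Wood model has the form y(t) = a·t^b·exp(−c·t); the study fitted it with R's minpack.lm, but an equivalent non-linear least-squares fit can be sketched in Python with simulated test-day records, as below (the parameter values and noise level are invented for illustration).

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation model: y = a * t**b * exp(-c * t)."""
    return a * np.power(t, b) * np.exp(-c * t)

# Simulated weekly test-day records (days in milk, litres/day).
dim = np.arange(7, 211, 7, dtype=float)
rng = np.random.default_rng(1)
yields = wood(dim, 0.8, 0.25, 0.008) + rng.normal(0, 0.05, dim.size)

(a, b, c), _ = curve_fit(wood, dim, yields, p0=(1.0, 0.2, 0.01))
peak_time = b / c                          # time of peak yield
peak_yield = wood(peak_time, a, b, c)      # predicted peak yield
total = wood(dim, a, b, c).sum() * 7       # crude TMY from weekly records
print(f"a={a:.2f} b={b:.2f} c={c:.4f} peak at {peak_time:.0f} d, TMY ~ {total:.0f} L")
```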

Keywords: gamma incomplete, ewes, shape curves, modeling

Procedia PDF Downloads 51
471 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

Determining soil elemental content and distribution (mapping) within a field is a key feature of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be directly used for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 neutron generator (Thermo Fisher Scientific, Inc.), three sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographical coordinates in a geographic information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts the data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours were needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. Comparison of these maps with maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity showed similar patterns. The maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural soil elemental field mapping.

Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy

Procedia PDF Downloads 119
470 A Study on Thermal and Flow Characteristics by Solar Radiation for Single-Span Greenhouse by Computational Fluid Dynamics Simulation

Authors: Jonghyuk Yoon, Hyoungwoon Song

Abstract:

Recently, there has been growing interest in smart farming, which represents the application of modern Information and Communication Technologies (ICT) to agriculture, since it provides a methodology to optimize production efficiency by managing the growing conditions of crops automatically. In order to obtain high performance and stability for a smart greenhouse, it is important to identify the effect of various working parameters such as the capacity of the ventilation fan, the vent opening area, etc. In the present study, a 3-dimensional CFD (Computational Fluid Dynamics) simulation of a single-span greenhouse was conducted using the commercial program Ansys CFX 18.0. The numerical simulation of the single-span greenhouse was implemented to determine the internal thermal and flow characteristics. In order to numerically model solar radiation, which is spread over a wide range of wavelengths, a multiband model that discretizes the spectrum into finite wavelength bands based on Wien's law was applied in the simulation. In addition, the absorption coefficient of the vinyl cover, which varies with the wavelength band, was also applied based on the Beer-Lambert law. To validate the numerical method applied herein, the numerical results for the temperature at specific monitoring points were compared with experimental data. The average error rates between them were 12.2~14.2%, and the numerical results for the temperature distribution are in good agreement with the experimental data. The results of the present study can provide useful information for the design of various greenhouses. This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture, Forestry and Fisheries (IPET) through the Advanced Production Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) (315093-03).
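
The spectral treatment mentioned above (a multiband split of the solar spectrum with wavelength-dependent absorption following the Beer-Lambert law) can be illustrated in a few lines of Python; the band limits, absorption coefficients, and cover thickness below are stand-in numbers, not the values used in the CFX model.

```python
import numpy as np

# Stand-in wavelength bands (um) with their share of incoming solar flux
# and an assumed absorption coefficient of the vinyl cover per band (1/m).
bands = [
    {"range_um": (0.2, 0.7),  "fraction": 0.45, "kappa": 15.0},
    {"range_um": (0.7, 2.5),  "fraction": 0.50, "kappa": 60.0},
    {"range_um": (2.5, 10.0), "fraction": 0.05, "kappa": 400.0},
]

def transmitted_flux(incident_w_m2, thickness_m):
    """Beer-Lambert attenuation I = I0 * exp(-kappa * L), applied band by band."""
    total = 0.0
    for b in bands:
        i0 = incident_w_m2 * b["fraction"]
        total += i0 * np.exp(-b["kappa"] * thickness_m)
    return total

print(f"{transmitted_flux(incident_w_m2=800.0, thickness_m=0.15e-3):.0f} W/m2 transmitted")
```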

Keywords: single-span greenhouse, CFD (computational fluid dynamics), solar radiation, multiband model, absorption coefficient

Procedia PDF Downloads 117
469 Patient Safety Culture in Brazilian Hospitals from Nurse's Team Perspective

Authors: Carmen Silvia Gabriel, Dsniele Bernardi da Costa, Andrea Bernardes, Sabrina Elias Mikael, Daniele da Silva Ramos

Abstract:

The goal of this quantitative study is to investigate patient safety culture from the perspective of professionals from the hospital nursing team. It was conducted in two Brazilian hospitals, and the sample included 282 nurses. Data collection occurred in 2013, through the questionnaire Hospital Survey on Patient Safety Culture. Based on the assessment of the dimensions, it is stressed that, in the dimension teamwork within hospital units, 69.4% of professionals agree that when a lot of work needs to be done quickly, they work together as a team; regarding the dimension supervisor/manager expectations and actions promoting safety, 70.2% agree that their supervisor overlooks patient safety problems. Related to organizational learning and continuous improvement, 56.5% agree that there is evaluation of the effectiveness of changes after their implementation. On hospital management support for patient safety, 52.8% report that the actions of hospital management show that patient safety is a top priority. On the overall perception of patient safety, 57.2% disagree that patient safety is never compromised due to a higher amount of work to be completed. With regard to feedback and communication about error, 57.7% report that they always or usually receive such information. Relative to communication openness, 42.9% said they never or rarely feel free to question the decisions/actions of their superiors. On the frequency of event reporting, 64.7% said they often or always notify events with no damage to patients. Regarding teamwork across hospital units, a similarity is noted between the percentages of agreement and disagreement, as in the item on good cooperation among hospital units that need to work together, which indicates 41.4% and 40.5%, respectively. Related to the adequacy of staffing, 77.8% disagree on the existence of a sufficient number of employees to do the job, and 52.4% agree that shift changes are problematic for patients. On the nonpunitive response to errors, 71.7% indicate that when an event is reported, it seems that the focus is on the person. On the patient safety grade of the institution, 41.6% classified it as very good. It is concluded that there are positive points in the safety culture, as well as some weaknesses, such as a punitive culture and impaired patient safety due to work overload.

Keywords: quality of health care, health services evaluation, safety culture, patient safety, nursing team

Procedia PDF Downloads 282
468 A Preliminary Kinematic Comparison of Vive and Vicon Systems for the Accurate Tracking of Lumbar Motion

Authors: Yaghoubi N., Moore Z., Van Der Veen S. M., Pidcoe P. E., Thomas J. S., Dexheimer B.

Abstract:

Optoelectronic 3D motion capture systems, such as the Vicon kinematic system, are widely utilized in biomedical research to track joint motion. These systems are considered powerful and accurate measurement tools with <2 mm average error. However, these systems are costly and may be difficult to implement and utilize in a clinical setting. 3D virtual reality (VR) is gaining popularity as an affordable and accessible tool to investigate motor control and perception in a controlled, immersive environment. The HTC Vive VR system includes puck-style trackers that seamlessly integrate into its VR environments. These affordable, wireless, lightweight trackers may be more feasible for clinical kinematic data collection. However, the accuracy of HTC Vive Trackers (3.0), when compared to optoelectronic 3D motion capture systems, remains unclear. In this preliminary study, we compared the HTC Vive Tracker system to a Vicon kinematic system in a simulated lumbar flexion task. A 6-DOF robot arm (SCORBOT ER VII, Eshed Robotec/RoboGroup, Rosh Ha’Ayin, Israel) completed various reaching movements to mimic increasing levels of hip flexion (15°, 30°, 45°). Light reflective markers, along with one HTC Vive Tracker (3.0), were placed on the rigid segment separating the elbow and shoulder of the robot. We compared position measures simultaneously collected from both systems. Our preliminary analysis shows no significant differences between the Vicon motion capture system and the HTC Vive tracker in the Z axis, regardless of hip flexion. In the X axis, we found no significant differences between the two systems at 15 degrees of hip flexion but minimal differences at 30 and 45 degrees, ranging from .047 cm ± .02 SE (p = .03) at 30 degrees hip flexion to .194 cm ± .024 SE (p < .0001) at 45 degrees of hip flexion. In the Y axis, we found a minimal difference for 15 degrees of hip flexion only (.743 cm ± .275 SE; p = .007). This preliminary analysis shows that the HTC Vive Tracker may be an appropriate, affordable option for gross motor motion capture when the Vicon system is not available, such as in clinical settings. Further research is needed to compare these two motion capture systems in different body poses and for different body segments.

Keywords: lumbar, Vive tracker, Vicon system, 3D motion, ROM

Procedia PDF Downloads 71
467 A Theoretical Framework of Patient Autonomy in a High-Tech Care Context

Authors: Catharina Lindberg, Cecilia Fagerstrom, Ania Willman

Abstract:

Patients in high-tech care environments are usually dependent on both formal/informal caregivers and technology, highlighting their vulnerability and challenging their autonomy. Autonomy presumes that a person has education, experience, self-discipline and decision-making capacity. Reference to autonomy in relation to patients in high-tech care environments could, therefore, be considered paradoxical, as in most cases these persons have impaired physical and/or metacognitive capacity. Therefore, to understand the prerequisites for patients to experience autonomy in high-tech care environments and to support them, there is a need to enhance knowledge and understanding of the concept of patient autonomy in this care context. The development of concepts and theories in a practice discipline such as nursing helps to improve both nursing care and nursing education. Theoretical development is important when clarifying a discipline, hence, a theoretical framework could be of use to nurses in high-tech care environments to support and defend the patient’s autonomy. A meta-synthesis was performed with the intention to be interpretative and not aggregative in nature. An amalgamation was made of the results from three previous studies, carried out by members of the same research group, focusing on the phenomenon of patient autonomy from a patient perspective within a caring context. Three basic approaches to theory development: derivation, synthesis, and analysis provided an operational structure that permitted the researchers to move back and forth between these approaches during their work in developing a theoretical framework. The results from the synthesis delineated that patient autonomy in a high-tech care context is: To be in control though trust, co-determination, and transition in everyday life. The theoretical framework contains several components creating the prerequisites for patient autonomy. Assumptions and propositional statements that guide theory development was also outlined, as were guiding principles for use in day-to-day nursing care. Four strategies used by patients to remain or obtain patient autonomy in high-tech care environments were revealed: the strategy of control, the strategy of partnership, the strategy of trust, and the strategy of transition. This study suggests an extended knowledge base founded on theoretical reasoning about patient autonomy, providing an understanding of the strategies used by patients to achieve autonomy in the role of patient, in high-tech care environments. When possessing knowledge about the patient perspective of autonomy, the nurse/carer can avoid adopting a paternalistic or maternalistic approach. Instead, the patient can be considered to be a partner in care, allowing care to be provided that supports him/her in remaining/becoming an autonomous person in the role of patient.

Keywords: autonomy, caring, concept development, high-tech care, theory development

Procedia PDF Downloads 188
466 Promoting 'One Health' Surveillance and Response Approach Implementation Capabilities against Emerging Threats and Epidemics Crisis Impact in African Countries

Authors: Ernest Tambo, Ghislaine Madjou, Jeanne Y. Ngogang, Shenglan Tang, Zhou XiaoNong

Abstract:

Implementing a national to community-based 'One Health' surveillance approach to mitigate human, animal, and environmental consequences offers great opportunities and added value for sustainable development and wellbeing. Global partnerships, policy commitment, and financial investment in the 'One Health' surveillance approach are much needed to address the mitigation of evolving threats and epidemic crises in African countries. The paper provides insights into how China-Africa health development cooperation can promote the 'One Health' surveillance approach in response advocacy and mitigation. China-Africa health development initiatives provide new prospects for guiding and advancing appropriate, evidence-based advocacy and mitigation management approaches and strategies toward attaining Universal Health Coverage (UHC) and the Sustainable Development Goals (SDGs). Early, continuous, high-quality, and timely surveillance data collection and coordinated information-sharing practices for malaria and other diseases are demonstrated in Comoros, Zanzibar, Ghana, and Cameroon. Improved access to a variety of contextual sources and to networks of data-sharing platforms is needed to guide evidence-based and tailored detection of, and response to, unusual hazardous events. Moreover, understanding threat and disease trends and delivering frontline or point-of-care responses are crucial to promoting the implementation of an integrated, sustainable, and targeted local and national 'One Health' surveillance and response approach. Importantly, operational guidelines are vital for increasing coherent financing and national workforce capacity development mechanisms, as is strengthening participatory partnerships, collaboration, and monitoring strategies to achieve an effective global health agenda in Africa. At the same time, the usefulness of surveillance data streams, reporting, and dissemination must be enhanced to inform policy decisions, health systems programming, and financial mobilization and prioritized allocation before, during, and after threats and epidemic crises, drawing on program strengths and weaknesses. Thus, capitalizing on 'One Health' surveillance and response advocacy and mitigation implementation is timely for consolidating the African Union Agenda 2063 and African renaissance capabilities and expectations.

Keywords: Africa, one health approach, surveillance, response

Procedia PDF Downloads 401
465 Prediction of Ionic Liquid Densities Using a Corresponding State Correlation

Authors: Khashayar Nasrifar

Abstract:

Ionic liquids (ILs) exhibit particular properties, exemplified by extremely low vapor pressure and high thermal stability. The properties of ILs can be tailored by proper selection of cations and anions. As such, ILs are appealing as potential solvents to substitute traditional solvents with high vapor pressure. One of the IL properties required in chemical and process design is density. In developing corresponding state liquid density correlations, the scaling hypothesis is often used. The hypothesis expresses the temperature dependence of saturated liquid densities near the vapor-liquid critical point as a function of reduced temperature. Extending this temperature dependence, several successful correlations were developed to accurately correlate the densities of normal liquids from the triple point to the critical point. Applying mixing rules, the liquid density correlations are extended to liquid mixtures as well. ILs are not molecular liquids, and they are not classified among normal liquids either. Also, ILs are often used where the conditions are far from equilibrium. Nevertheless, in calculating the properties of ILs, corresponding state correlations would be useful if no experimental data were available. With well-known generalized saturated liquid density correlations, the accuracy in predicting the density of ILs is not that good; an average error of 4-5% should be expected. In this work, a data bank was compiled. A simplified and concise corresponding state saturated liquid density correlation is proposed by phenomenologically modifying the reduced temperature using the temperature dependence of the attractive (interaction) parameter of the Soave-Redlich-Kwong equation of state. This modification improves the temperature dependence of the developed correlation. Parametrization was next performed to optimize the three global parameters of the correlation. The correlation was then applied to the ILs in our data bank with satisfactory predictions. Applied to IL densities at 0.1 MPa, the correlation was tested with an average uncertainty of around 2%. No adjustable parameter was used; only the critical temperature, critical volume, and acentric factor were required. Methods to extend the predictions to higher pressures (200 MPa) were also devised. Compared to other methods, this correlation was found to be more accurate. This work also presents the chronological order of developing such correlations dealing with ILs. The pros and cons are also expressed.
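
To make the corresponding state idea concrete, the sketch below estimates a saturated liquid density from critical constants and the acentric factor. It is a generic Rackett-type estimate shown alongside the SRK alpha(T) function the abstract borrows; it is not the correlation actually developed in this work, and the property values for the example imidazolium-type ionic liquid are illustrative figures of the kind tabulated in the literature.

```python
# A minimal, generic corresponding-state sketch (NOT the correlation developed in
# this work): a Rackett-type saturated liquid density estimate from critical
# constants, plus the SRK alpha(T) term that the abstract uses to modify the
# reduced temperature. The property values below are illustrative only.
import math

R = 8.314  # J/(mol K)

def srk_alpha(tr: float, omega: float) -> float:
    """SRK temperature-dependence term alpha(Tr) of the attractive parameter."""
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    return (1.0 + m * (1.0 - math.sqrt(tr)))**2

def rackett_density(t: float, tc: float, pc: float, vc: float, mw: float) -> float:
    """Generic Rackett estimate of saturated liquid density in kg/m3."""
    tr = t / tc
    zc = pc * vc / (R * tc)                                      # critical compressibility
    v_sat = (R * tc / pc) * zc**(1.0 + (1.0 - tr)**(2.0 / 7.0))  # molar volume, m3/mol
    return mw / v_sat

# Illustrative critical constants and acentric factor for an imidazolium-type IL.
t, tc, pc, vc = 298.15, 708.9, 1.73e6, 7.79e-4   # K, K, Pa, m3/mol
omega, mw = 0.755, 0.28418                       # acentric factor, kg/mol

print(f"SRK alpha(Tr) = {srk_alpha(t / tc, omega):.3f}")
print(f"Rackett density estimate ~ {rackett_density(t, tc, pc, vc, mw):.0f} kg/m3")
# Prints roughly 1290 kg/m3, several percent below typical measured densities for
# such ILs, i.e. about the error level the abstract quotes for generalized correlations.
```

The correlation proposed in this work then goes further by optimizing three global parameters against the compiled IL data bank, which is how the reported average uncertainty of around 2% at 0.1 MPa is reached.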

Keywords: correlation, corresponding state principle, ionic liquid, density

Procedia PDF Downloads 106
464 Sludge Marvel (Densification): The Ultimate Solution For Doing More With Less Effort!

Authors: Raj Chavan

Abstract:

At present, the United States is home to more than 14,000 Water Resource Recovery Facilities (WRRFs), of which approximately 35% have implemented nutrient limits of some kind. These WRRFs contribute 10 to 15% of the total nutrient burden to surface rivers in the United States and account for approximately 1% of total power demand and 2% of total greenhouse gas emissions (GHG). Several factors have pushed the development of densification technologies toward more compact and energy-efficient nutrient removal processes. Existing facilities that require capacity expansion, or biomass densification for greater treatability within the same footprint, are being subjected to stricter nutrient removal requirements prior to surface water discharge. Densification of activated sludge as a method for nutrient removal and process intensification at WRRFs has garnered considerable attention in recent times. The biological processes take place within aerobic sludge granules, which form the basis of the technology. The possibility of generating granular sludge through continuous (or conventional) activated sludge (CAS) processes, or of densifying biomass by converting activated sludge flocs into denser biomass aggregates, has generated considerable interest as an exceptionally efficient intensification technique. This presentation aims to furnish attendees with a foundational comprehension of densification through the illustration of practical concerns and insights. The following subjects will be deliberated upon: What are some potential techniques for producing and preserving densified granules? What processes are responsible for the densification of biological flocs? How do physical selectors contribute to the process of biological flocs becoming denser? What viable strategies exist for the management of densified biological flocs, and which design parameters of physical selectors influence the retention of densified biological flocs? How can operational solutions for floc and granule customization be determined in order to meet capacity and performance objectives? The answers to these pivotal questions will be derived from existing full-scale treatment facilities, bench-scale and pilot-scale investigations, and existing literature data. By the conclusion of the presentation, the audience will possess a fundamental comprehension of the densification concept and its significance in attaining effective effluent treatment. Additionally, case studies pertaining to the design and operation of densification procedures will be incorporated into the presentation.

Keywords: densification, intensification, nutrient removal, granular sludge

Procedia PDF Downloads 53
463 Bank Internal Controls and Credit Risk in Europe: A Quantitative Measurement Approach

Authors: Ellis Kofi Akwaa-Sekyi, Jordi Moreno Gené

Abstract:

Managerial actions which negatively profile banks and impair corporate reputation are addressed through effective internal control systems. Disregard for acceptable standards and procedures for granting credit has affected bank loan portfolios and could be cited as a factor in the crises in some European countries. The study intends to determine the effectiveness of internal control systems, investigate whether perceived agency problems exist on the part of board members, and establish the relationship between internal controls and credit risk among listed banks in the European Union. Drawing theoretical support from the behavioural compliance and agency theories, about seventeen internal control variables (drawn from the revised COSO framework), together with bank-specific, country, stock market and macro-economic variables, will be involved in the study. A purely quantitative approach will be employed to model internal control variables covering the control environment, risk management, control activities, information and communication, and monitoring. Panel data from 2005-2014 on listed banks from 28 European Union countries will be used for the study. Hypotheses will be tested and a Generalized Least Squares (GLS) regression will be run to establish the relationship between the dependent and independent variables. The Hausman test will be used to select between the random and fixed effects models. It is expected that listed banks will have sound internal control systems, but their effectiveness cannot be confirmed. A perceived agency problem on the part of the board of directors is expected to be confirmed. The study expects a significant effect of internal controls on credit risk. The study will uncover another perspective on internal controls as not only an operational risk issue but a credit risk issue too. Banks will be made aware that maintaining effective internal control systems is an ethical and socially responsible act, since the collapse of financial institutions as a result of excessive default is a major source of contagion. This study deviates from the usual primary data approach to measuring internal control variables and instead models internal control variables quantitatively using panel data. A grey area in approaching the revised COSO framework for internal controls is thus opened for further research. Most bank failures and crises could be averted if effective internal control systems were religiously adhered to.
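
As a sketch of the panel estimation and Hausman specification test the abstract describes, the example below fits fixed and random effects models on a synthetic bank-year panel using the linearmodels package; the variable names (control_env, risk_mgmt, bank_size, credit_risk) and the simulated data are hypothetical placeholders, not the study's actual variables or results.

```python
# Fixed vs random effects on a synthetic bank-year panel, followed by a Hausman test.
# All variable names and data are hypothetical stand-ins for the study's variables.
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

rng = np.random.default_rng(0)
banks = [f"bank_{i:03d}" for i in range(100)]
years = list(range(2005, 2015))
idx = pd.MultiIndex.from_product([banks, years], names=["bank", "year"])

df = pd.DataFrame({
    "control_env": rng.normal(size=len(idx)),   # e.g. a COSO control-environment score
    "risk_mgmt": rng.normal(size=len(idx)),     # e.g. a COSO risk-management score
    "bank_size": rng.normal(size=len(idx)),     # a bank-specific control variable
}, index=idx)
df["credit_risk"] = (-0.20 * df["control_env"] - 0.10 * df["risk_mgmt"]
                     + 0.05 * df["bank_size"] + rng.normal(scale=0.5, size=len(idx)))

exog = df[["control_env", "risk_mgmt", "bank_size"]]
fe = PanelOLS(df["credit_risk"], exog, entity_effects=True).fit()  # fixed effects
re = RandomEffects(df["credit_risk"], exog).fit()                  # random effects (a GLS estimator)

# Hausman test: a significant statistic favours the fixed-effects specification.
b_diff = (fe.params - re.params).values
v_diff = (fe.cov - re.cov).values
h_stat = float(b_diff @ np.linalg.inv(v_diff) @ b_diff)
p_value = stats.chi2.sf(h_stat, df=len(b_diff))
print(f"Hausman chi2 = {h_stat:.2f}, p = {p_value:.4f}")
```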

Keywords: agency theory, credit risk, internal controls, revised COSO framework

Procedia PDF Downloads 288
462 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this remains challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems such as shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various career guidance systems that work on the same logic, such as classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies such as KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to user data to compute predictions that help identify the right careers. Besides helping users with their career choice, these systems provide numerous opportunities that are very useful while making this hard decision. They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform, taking into account the user's lack of knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
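
As an illustration of the classification step such systems rely on, the sketch below trains a KNN model that maps a user's self-assessed skill and personality scores to a suggested career; the feature names, career labels, and scores are hypothetical and serve only to show the shape of the approach.

```python
# A minimal sketch of a KNN-based career recommendation step.
# The features, labels, and data below are hypothetical placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: [math, programming, communication, creativity, openness] scores (0-10).
X = np.array([
    [9, 9, 4, 5, 6],
    [8, 7, 5, 4, 5],
    [3, 2, 9, 8, 9],
    [4, 3, 8, 9, 8],
    [7, 4, 7, 5, 6],
    [6, 5, 8, 6, 7],
])
y = ["software engineer", "software engineer",
     "marketing specialist", "graphic designer",
     "project manager", "project manager"]

# Scale features so no single skill dominates the distance metric, then fit KNN.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)

candidate = np.array([[8, 6, 6, 5, 7]])   # a new user's self-assessed profile
print(model.predict(candidate))           # suggested career label
print(model.classes_)                     # label order for the probabilities below
print(model.predict_proba(candidate))     # neighbourhood vote shares per career
```

In a real system the same pipeline would be trained on historical user profiles and career outcomes, and the vote shares could drive the recommendation and skill-gap feedback described above.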

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 60
461 Correlation Study between Clinical and Radiological Findings in Knee Osteoarthritis

Authors: Nabil A. A. Mohamed, Alaa A. A. Balbaa, Khaled E. Ayad

Abstract:

Osteoarthritis (OA) of the knee is the most common form of arthritis and leads to more activity limitations (e.g., disability in walking and stair climbing) than any other disease, especially in the elderly. Recently, impaired proprioceptive accuracy of the knee has been proposed as a local factor in the onset and progression of radiographic knee OA (ROA). Purpose: To compare the clinical and radiological findings of healthy subjects with those of knee OA patients, and to determine whether there is a correlation between the clinical and radiological findings in patients with knee OA. Subjects: Fifty-one patients diagnosed with unilateral or bilateral knee OA, aged 35-70 years, of both genders and without any previous history of knee trauma or surgery, and twenty-one normal subjects aged 35-68 years. Methods: Peak torque/body weight (PT/BW) of the knee extensors was recorded in the isometric mode of an isokinetic dynamometer at an angle of 45 degrees. The absolute angular error (AAE) at 45 and 30 degrees was also recorded to measure joint position sense (JPS). Anteroposterior (AP) plain X-rays were taken in a standing semi-flexed knee position, and the average scores of the Timed Up and Go test (TUG) and the WOMAC were recorded as measures of knee pain, stiffness and function. Comparison between the mean values of the different variables in the two groups was performed using the unpaired Student t-test. A P value less than or equal to 0.05 was considered significant. Results: There were significant differences in the studied variables between the experimental and control groups, except for the values of AAE at 30 degrees. Also, there were no significant correlations between the clinical findings (pain, function, muscle strength and proprioception) and the severity of arthritic changes on X-ray. Conclusion: From the findings of the current study, we can conclude that there was a significant difference between the two groups in all studied parameters (WOMAC, functional level, quadriceps muscle strength and joint proprioception). This study also does not support reliance on radiological findings alone in the management of knee OA, as the radiological features did not necessarily indicate the level of structural damage in patients with knee OA; clinical features should be considered in the treatment plan.
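
A brief sketch of the group comparison and the clinical-radiological correlation analysis described above is given below; the synthetic WOMAC scores and radiographic grades are placeholders, not the study's data.

```python
# Unpaired Student t-test between groups and a correlation between a clinical score
# and radiographic severity. All values are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
womac_oa = rng.normal(loc=45, scale=12, size=51)   # hypothetical WOMAC, OA group (n=51)
womac_ctrl = rng.normal(loc=12, scale=6, size=21)  # hypothetical WOMAC, controls (n=21)

# Unpaired Student t-test between the two groups (significant if p <= 0.05).
t_stat, p_val = stats.ttest_ind(womac_oa, womac_ctrl)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# Correlation between the clinical score and radiographic severity (e.g. a 1-4 grade).
xray_grade = rng.integers(1, 5, size=51)           # hypothetical radiographic grades
rho, p_rho = stats.spearmanr(womac_oa, xray_grade)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```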

Keywords: joint position sense, peak torque, proprioception, radiological knee osteoarthritis

Procedia PDF Downloads 285
460 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) has been created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this remains challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems such as shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that would help in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. There are various career guidance systems that work on the same logic, such as classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies such as KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to user data to compute predictions that help identify the right careers. Besides helping users with their career choice, these systems provide numerous opportunities that are very useful while making this hard decision. They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform, taking into account the user's lack of knowledge. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 49
459 Organ Dose Calculator for Fetus Undergoing Computed Tomography

Authors: Choonsik Lee, Les Folio

Abstract:

Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida at gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses at consecutive slice locations, ranging from the top to the bottom of the pregnancy phantoms, with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the given scan range of interest. We then compared the dose conversion coefficients for major fetal organs in abdominal-pelvis CT scans of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT. The coefficients were implemented into an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses by inputting CT technical parameters as well as the age of the fetus. We found that the esophagus received the lowest dose, whereas the kidneys received the greatest dose, in all fetuses in abdominal-pelvis scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks). The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose estimates for emergency CT scans, as well as for patient dose monitoring.
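
The core dose look-up logic the abstract describes can be sketched as follows: per-slice, CTDIvol-normalized conversion coefficients for the relevant fetus age are summed over the scan range and scaled by the exam's CTDIvol. The function name, the coefficient values, and the 20-week table below are hypothetical placeholders for the actual coefficient library.

```python
# A minimal sketch of helical-scan fetal organ dose estimation from per-slice
# conversion coefficients (mGy/mGy, per 1 cm axial slice). The coefficient table
# is a hypothetical stand-in for the library computed from the pregnancy phantoms.
from typing import Dict, List

def fetal_organ_dose(
    coeffs_per_slice: Dict[str, List[float]],  # organ -> coefficient per 1 cm slice
    scan_start_cm: int,                        # first slice index in the scan range
    scan_end_cm: int,                          # last slice index (inclusive)
    ctdi_vol_mgy: float,                       # scanner-reported CTDIvol for the exam
) -> Dict[str, float]:
    """Approximate helical-scan organ doses (mGy) by summing axial-slice contributions."""
    doses = {}
    for organ, per_slice in coeffs_per_slice.items():
        in_range = per_slice[scan_start_cm:scan_end_cm + 1]
        doses[organ] = ctdi_vol_mgy * sum(in_range)
    return doses

# Hypothetical per-slice coefficients for a 20-week fetus model (values are made up).
library_20wk = {
    "kidneys":   [0.00, 0.02, 0.06, 0.09, 0.09, 0.06, 0.02, 0.00],
    "esophagus": [0.00, 0.01, 0.02, 0.03, 0.03, 0.02, 0.01, 0.00],
}

print(fetal_organ_dose(library_20wk, scan_start_cm=1, scan_end_cm=6, ctdi_vol_mgy=10.0))
# Roughly {'kidneys': 3.4, 'esophagus': 1.2} mGy for this made-up example.
```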

Keywords: computed tomography, fetal dose, pregnant women, radiation dose

Procedia PDF Downloads 118
458 Development of a Bi-National Thyroid Cancer Clinical Quality Registry

Authors: Liane J. Ioannou, Jonathan Serpell, Joanne Dean, Cino Bendinelli, Jenny Gough, Dean Lisewski, Julie Miller, Win Meyer-Rochow, Stan Sidhu, Duncan Topliss, David Walters, John Zalcberg, Susannah Ahern

Abstract:

Background: The occurrence of thyroid cancer is increasing throughout the developed world, including Australia and New Zealand, and since the 1990s it has become the fastest increasing malignancy. Following the success of a number of institutional databases that monitor outcomes after thyroid surgery, the Australian and New Zealand Endocrine Surgeons (ANZES) agreed to auspice the development of a bi-national thyroid cancer registry. Objectives: To establish a bi-national population-based clinical quality registry with the aim of monitoring and improving the quality of care provided to patients diagnosed with thyroid cancer in Australia and New Zealand. Patients and Methods: The Australian and New Zealand Thyroid Cancer Registry (ANZTCR) captures clinical data for all patients over the age of 18 years diagnosed with thyroid cancer, confirmed by histopathology report, who have been diagnosed, assessed or treated at a contributing hospital. Data are collected by endocrine surgeons using a web-based interface, REDCap, primarily via direct data entry. Results: A multi-disciplinary Steering Committee was formed, and with operational support from Monash University the ANZTCR was established in early 2017. The pilot phase of the registry is currently operating in Victoria, New South Wales, Queensland, Western Australia and South Australia, with over 30 sites expected to come on board across Australia and New Zealand in 2018. A modified Delphi process was undertaken to determine the key quality indicators to be reported by the registry, and a minimum dataset was developed comprising information regarding thyroid cancer diagnosis, pathology, surgery, and 30-day follow-up. Conclusion: There are very few established thyroid cancer registries internationally, yet clinical quality registries have shown valuable outcomes and patient benefits in other cancers. The establishment of the ANZTCR provides the opportunity for Australia and New Zealand to further understand current practice in the treatment of thyroid cancer and the reasons for variation in outcomes. The engagement of endocrine surgeons in supporting this initiative is crucial. While the pilot registry has a focus on early clinical outcomes, it is anticipated that the future collection of longer-term outcome data, particularly for patients with poor-prognosis disease, will add significant further value to the registry.

Keywords: thyroid cancer, clinical registry, population health, quality improvement

Procedia PDF Downloads 174