Search results for: optimizing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 736

106 AI Applications in Accounting: Transforming Finance with Technology

Authors: Alireza Karimi

Abstract:

Artificial Intelligence (AI) is reshaping various industries, and accounting is no exception. With the ability to process vast amounts of data quickly and accurately, AI is revolutionizing how financial professionals manage, analyze, and report financial information. In this article, we will explore the diverse applications of AI in accounting and its profound impact on the field. Automation of Repetitive Tasks: One of the most significant contributions of AI in accounting is automating repetitive tasks. AI-powered software can handle data entry, invoice processing, and reconciliation with minimal human intervention. This not only saves time but also reduces the risk of errors, leading to more accurate financial records. Pattern Recognition and Anomaly Detection: AI algorithms excel at pattern recognition. In accounting, this capability is leveraged to identify unusual patterns in financial data that might indicate fraud or errors. AI can swiftly detect discrepancies, enabling auditors and accountants to focus on resolving issues rather than hunting for them. Real-Time Financial Insights: AI-driven tools, using natural language processing and computer vision, can process documents faster than ever. This enables organizations to have real-time insights into their financial status, empowering decision-makers with up-to-date information for strategic planning. Fraud Detection and Prevention: AI is a powerful tool in the fight against financial fraud. It can analyze vast transaction datasets, flagging suspicious activities and reducing the likelihood of financial misconduct going unnoticed. This proactive approach safeguards a company's financial integrity. Enhanced Data Analysis and Forecasting: Machine learning, a subset of AI, is used for data analysis and forecasting. By examining historical financial data, AI models can provide forecasts and insights, aiding businesses in making informed financial decisions and optimizing their financial strategies. Artificial Intelligence is fundamentally transforming the accounting profession. From automating mundane tasks to enhancing data analysis and fraud detection, AI is making financial processes more efficient, accurate, and insightful. As AI continues to evolve, its role in accounting will only become more significant, offering accountants and finance professionals powerful tools to navigate the complexities of modern finance. Embracing AI in accounting is not just a trend; it's a necessity for staying competitive in the evolving financial landscape.

Keywords: artificial intelligence, accounting automation, financial analysis, fraud detection, machine learning in finance

Procedia PDF Downloads 36
105 Spatial Distribution and Cluster Analysis of Sexual Risk Behaviors and STIs Reported by Chinese Adults in Guangzhou, China: A Representative Population-Based Study

Authors: Fangjing Zhou, Wen Chen, Brian J. Hall, Yu Wang, Carl Latkin, Li Ling, Joseph D. Tucker

Abstract:

Background: Economic and social reforms designed to open China to the world have been successful, but they also appear to have rapidly laid the foundation for the reemergence of STIs since the 1980s. Changes in sexual behaviors, relationships, and norms among Chinese people contributed to the STI epidemic. With the massive population movement of the last 30 years, early coital debut, multiple sexual partnerships, and unprotected sex have increased within the general population. Our objectives were to assess associations between residence location, sexual risk behaviors and sexually transmitted infections (STIs) among adults living in Guangzhou, China. Methods: Stratified cluster sampling following a two-step process was used to select populations aged 18-59 years in Guangzhou, China. Spatial methods including Geographic Information Systems (GIS) were utilized to identify 1400 coordinates with latitude and longitude. Face-to-face household interviews were conducted to collect self-report data on sexual risk behaviors and diagnosed STIs. Kulldorff's spatial scan statistic was implemented to detect the spatial distribution and clusters of sexual risk behaviors and STIs. The presence and location of statistically significant clusters were mapped in the study areas using ArcGIS software. Results: In this study, 1215 of 1400 households attempted surveys, with 368 refusals, resulting in a sample of 751 completed surveys. The prevalence of self-reported sexual risk behaviors was between 5.1% and 50.0%. The self-reported lifetime prevalence of diagnosed STIs was 7.06%. Anal intercourse clustered in an area located along the border within the rural-urban continuum (p=0.001). High-rate clusters for alcohol or other drug use before sex (p=0.008) and for migrants who had lived in Guangzhou less than one year (p=0.007) overlapped this cluster. Excess cases of sex without a condom (p=0.031) overlapped the cluster for college students (p<0.001). Conclusions: Short-term migrants and college students reported greater sexual risk behaviors. Programs to increase safer sex within these communities to reduce the risk of STIs are warranted in Guangzhou. Spatial analysis identified geographical clusters of sexual risk behaviors, which is critical for optimizing surveillance and targeting control measures for these locations in the future.
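
For readers unfamiliar with the scan statistic mentioned above, the following minimal sketch illustrates the Bernoulli-model likelihood ratio at the core of Kulldorff's method, searched over simple circular windows; the point coordinates, case labels and radii are invented for illustration, and the Monte Carlo significance testing used in practice (e.g., in SaTScan or an ArcGIS workflow) is omitted.

```python
import numpy as np

def bernoulli_llr(c, n, C, N):
    """Log-likelihood ratio for a candidate window under Kulldorff's Bernoulli model.
    c: cases inside window, n: points inside window, C: total cases, N: total points."""
    if n == 0 or n == N:
        return 0.0
    p_in, p_out = c / n, (C - c) / (N - n)
    if p_in <= p_out:                      # only windows with elevated risk are of interest
        return 0.0
    def xlogy(x, y):
        return x * np.log(y) if x > 0 else 0.0
    ll_alt = (xlogy(c, p_in) + xlogy(n - c, 1 - p_in)
              + xlogy(C - c, p_out) + xlogy(N - n - (C - c), 1 - p_out))
    ll_null = xlogy(C, C / N) + xlogy(N - C, 1 - C / N)
    return ll_alt - ll_null

def scan_circular(coords, is_case, radii):
    """Brute-force search over circles centred on each point; returns the best window."""
    coords, is_case = np.asarray(coords, float), np.asarray(is_case, bool)
    N, C = len(is_case), int(is_case.sum())
    best = (0.0, None, None)               # (llr, centre index, radius)
    for i, centre in enumerate(coords):
        dist = np.linalg.norm(coords - centre, axis=1)
        for r in radii:
            inside = dist <= r
            llr = bernoulli_llr(int(is_case[inside].sum()), int(inside.sum()), C, N)
            if llr > best[0]:
                best = (llr, i, r)
    return best

# toy illustration: 200 random households, elevated risk near (0.2, 0.2)
rng = np.random.default_rng(1)
pts = rng.random((200, 2))
risk = np.where(np.linalg.norm(pts - [0.2, 0.2], axis=1) < 0.15, 0.4, 0.07)
cases = rng.random(200) < risk
print(scan_circular(pts, cases, radii=[0.1, 0.15, 0.2]))
```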

Keywords: cluster analysis, migrant, sexual risk behaviors, spatial distribution

Procedia PDF Downloads 310
104 Evaluation of Antarctic Bacteria as Potential Producers of Cellulolytic Enzymes of Industrial Interest

Authors: Claudio Lamilla, Andrés Santos, Vicente Llanquinao, Jocelyn Hermosilla, Leticia Barrientos

Abstract:

The industry in general is very interested in improving and optimizing industrial processes in order to reduce the costs involved in obtaining raw materials and production. Thus, an interesting and cost-effective alternative is the incorporation of bioactive metabolites into such processes, an example being enzymes, which efficiently catalyze a large number of reactions of industrial and biotechnological interest. In the search for new sources of these active metabolites, Antarctica is one of the least explored places on our planet, where some of the most drastic conditions of cold, salinity, UVA-UVB radiation and limited availability of liquid water are present. These features have shaped all life in this very harsh environment, especially the bacteria that live in different Antarctic ecosystems, which have had to develop different strategies to adapt to these conditions, producing unique biochemical adaptations. In this work, the production of cellulolytic enzymes by seven bacterial strains isolated from marine sediments at different sites in the Antarctic was evaluated. Isolation of the strains was performed using serial dilutions in M1 culture medium at 15°C. The identification of the strains was performed using universal primers (27F and 1492R). The enzyme activity assays were performed on R2A medium, with carboxymethyl cellulose (CMC) added as substrate. Degradation of the substrate was revealed by adding Lugol. The results show that four of the tested strains produce enzymes which degrade the CMC substrate. The molecular identification showed that these bacteria belong to the genera Streptomyces and Pseudoalteromonas, with the Streptomyces strain showing the highest activity. Only some bacteria in marine sediments have the ability to produce these enzymes, perhaps due to their greater ability to degrade, at temperatures bordering zero degrees Celsius, some algae that are abundant in this environment and have cellulose as their main structural component. The discovery of new cold-adapted enzymes is of great industrial interest, especially for the paper, textile, detergent, biofuel, food and agriculture industries. These enzymes represent 8% of industrial demand worldwide, and their demand is expected to increase in the coming years. They are required mainly in the paper and food industries for the extraction of starch, protein and juices, as well as in the animal feed industry, where treating vegetables and grains helps improve the nutritional value of the feed. All of this clearly positions Antarctic microorganisms, and their enzymes specifically, as a potential contribution to industry and to novel biotechnological applications.

Keywords: antarctic, bacteria, biotechnological, cellulolytic enzymes

Procedia PDF Downloads 266
103 Exploratory Study to Obtain a Biolubricant Base from Transesterified Oils of Animal Fats (Tallow)

Authors: Carlos Alfredo Camargo Vila, Fredy Augusto Avellaneda Vargas, Debora Alcida Nabarlatz

Abstract:

Due to the current need to implement environmentally friendly technologies, the possibility of using renewable raw materials originating from the bovine industry, namely residual fats (tallow), to produce bioproducts such as biofuels or, in this case, biolubricant bases, has been studied. It is therefore hypothesized that, through the study and control of the operating variables involved in the reverse transesterification method, a high-performance biolubricant base can be obtained on a laboratory scale using animal fats from the bovine industry as raw material, as an alternative for material recovery and environmental benefit. To implement this process, esterification of the crude tallow oil must be carried out first, which allows the acid value to be reduced (to below 1 mg KOH/g oil) by means of acid catalysis with sulfuric acid and methanol at a 7.5:1 methanol:tallow molar ratio and 1.75% w/w catalyst at 60°C for 150 minutes. Once this conditioning has been completed, biodiesel is obtained from the improved tallow, for which an experimental design for the transesterification method is implemented, evaluating the effects of the variables involved in the process, such as the methanol:improved tallow molar ratio and the catalyst percentage (KOH), on the methyl ester content (% FAME). The highest FAME percentage (92.5%) is obtained with a 7.5:1 methanol:improved tallow ratio and 0.75% catalyst at 60°C for 120 minutes. Although the % FAME of the biodiesel produced does not make it suitable for commercialization, it is sufficient (> 90%) for its use as a raw material for obtaining biolubricant bases. Finally, once the biodiesel is obtained, an experimental design is carried out to obtain biolubricant bases using the reverse transesterification method, which allows the study of the effects of the biodiesel:TMP (trimethylolpropane) molar ratio and the percentage of catalyst on viscosity and yield as response variables. As a result, a biolubricant base is obtained that meets the requirements of ISO VG 32 (classification for industrial lubricants according to ASTM D 2422) for commercial lubricant bases in terms of viscosity and viscosity index, using a 4:1 biodiesel:TMP molar ratio and 0.51% catalyst at 120°C and a pressure of 50 mbar for 180 minutes. It should be highlighted that the product obtained consists of two phases, one liquid and one solid, the first being the object of study, leaving the classification and possible application of the second unresolved. It is therefore recommended to carry out more in-depth studies to characterize both phases, as well as to improve the production method by optimizing the variables involved in the process and thus achieve superior results.

Keywords: biolubricant base, bovine tallow, renewable resources, reverse transesterification

Procedia PDF Downloads 95
102 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth

Authors: Elham Soltani Dehnavi

Abstract:

In today's compact cities, bringing daylighting and fresh air into buildings is a significant challenge, but it also presents opportunities to reduce energy consumption in buildings by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to their efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to occupant experience and minimize the difference between the predicted and measured performance in buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates spatial daylighting and thermal comfort conditions for a range of room configurations using computer simulations, and then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, for each room type. The Window-to-Wall Ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. There are also some fixed parameters chosen according to building codes and standards, and the simulations are done in Seattle, USA. The simulation results are presented as evaluation grids using thresholds for different metrics, such as Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, all extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus. According to the results, some metrics do not change much along the room depth and some of them change significantly, so we can overlap these grids in order to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 metrics' thresholds define the comfort zone. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other parts of the room can be used for tasks that are not performed permanently, that need lower or higher amounts of daylight, or for which thermal comfort is less critical to the user experience. The results can be reflected in a table to be used as a guideline by designers in the early stages of the design process.
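
As a rough illustration of the grid-overlap step described in this abstract, the sketch below combines per-pixel results for eight of the metrics into a single boolean comfort-zone mask; the grid size, random placeholder values and threshold numbers are assumptions for demonstration only, not the values used in the study.

```python
import numpy as np

# each array holds one simulated value per analysis-grid pixel (rows x cols);
# here they are filled with random numbers purely as placeholders
rng = np.random.default_rng(0)
shape = (10, 20)                        # e.g. 10 pixels across the width, 20 along the depth
DA = rng.uniform(0, 100, shape)         # Daylight Autonomy, % of occupied hours
ASE = rng.uniform(0, 500, shape)        # Annual Sunlight Exposure, hours
DGP = rng.uniform(0, 0.6, shape)        # Daylight Glare Probability
PMV = rng.uniform(-1.5, 1.5, shape)     # Predicted Mean Vote
PPD = rng.uniform(0, 40, shape)         # Predicted Percentage of Dissatisfied, %
SET = rng.uniform(20, 30, shape)        # Standard Effective Temperature, degC
occTCP = rng.uniform(0, 100, shape)     # occupied Thermal Comfort Percentage, %
overheated = rng.uniform(0, 20, shape)  # over-heated time, % of occupied hours

# illustrative thresholds (assumed, not the study's values)
comfort = (
    (DA >= 50) & (ASE <= 250) & (DGP <= 0.40) &
    (np.abs(PMV) <= 0.5) & (PPD <= 10) &
    (SET >= 22) & (SET <= 26) &
    (occTCP >= 75) & (overheated <= 5)
)

# pixels that satisfy every threshold form the comfort zone for sedentary areas
print(f"{comfort.mean():.0%} of the floor grid qualifies as comfort zone")
print("comfortable depth bands (column indices):", np.where(comfort.all(axis=0))[0])
```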

Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort

Procedia PDF Downloads 154
101 Cognitive Performance Post Stroke Is Affected by the Timing of Evaluation

Authors: Ayelet Hersch, Corrine Serfaty, Sigal Portnoy

Abstract:

Stroke survivors commonly report persistent fatigue and sleep disruptions during rehabilitation and post-recovery. While limited research has explored the impact of stroke on a patient's chronotype, there is a gap in understanding the differences in cognitive performance based on treatment timing. Study objectives: (a) To characterize the sleep chronotype in sub-acute post-stroke individuals. (b) Explore cognitive task performance differences during preferred and non-preferred hours. (c) Examine the relationships between sleep quality and cognitive performance. For this intra-subject study, twenty participants (mean age 60.2±8.6) post-first stroke (6-12 weeks post stroke) underwent assessments at preferred and non-preferred chronotypic times. The assessment included demographic surveys, the Munich Chronotype Questionnaire, Montreal Cognitive Assessment (MoCA), Rivermead Behavioral Memory Test (RBMT), a fatigue questionnaire, and 4-5 days of actigraphy (wrist-worn wGT3X-BT, ActiGraph) to record sleep characteristics. Four sleep quality indices were extracted from actigraphy wristwatch recordings: The average of total sleep time per day (minutes), the average number of awakenings during the sleep period per day, the efficiency of sleep (total hours of sleep per day divided by hours spent in bed per day, averaged across the days and presented as percentage), and the Wake after Sleep Onset (WASO) index, indicating the average number of minutes elapsed from the onset of sleep to the first awakening. Stroke survivors exhibited an earlier sleep chronotype post-injury compared to pre-injury. Enhanced attention, as indicated by higher RBMT scores, occurred during preferred hours. Specifically, 30% of the study participants demonstrated an elevation in their final scores during their preferred hours, transitioning from the category of "mild memory impairment" to "normal memory." However, no significant differences emerged in executive functions, attention tasks, and MoCA scores between preferred and non-preferred hours. The Wake After Sleep Onset (WASO) index correlated with MoCA/RBMT scores during preferred hours (r=0.53/0.51, p=0.021/0.027, respectively). The number of awakenings correlated with MoCA letter task performance during non-preferred hours (r=0.45, p=0.044). Enhanced attention during preferred hours suggests a potential relationship between chronotype and cognitive performance, highlighting the importance of personalized rehabilitation strategies in stroke care. Further exploration of these relationships could contribute to optimizing the timing of cognitive interventions for stroke survivors.
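
The four actigraphy-derived sleep quality indices can be illustrated with a short sketch; the minute-by-minute sleep/wake coding below is invented for demonstration, and WASO is computed here in the usual way (total wake minutes after sleep onset), which differs slightly from the wording used in the abstract.

```python
import numpy as np

def sleep_indices(epochs):
    """epochs: 1-minute in-bed epochs for one night, 1 = asleep, 0 = awake."""
    epochs = np.asarray(epochs, int)
    total_sleep = int(epochs.sum())                      # minutes asleep
    time_in_bed = len(epochs)                            # minutes in bed
    efficiency = 100.0 * total_sleep / time_in_bed       # percentage
    sleep_onset = int(np.argmax(epochs == 1))            # first asleep epoch
    after_onset = epochs[sleep_onset:]
    waso = int((after_onset == 0).sum())                 # wake minutes after sleep onset
    # an "awakening" = a transition from sleep (1) to wake (0) after sleep onset
    awakenings = int(((after_onset[:-1] == 1) & (after_onset[1:] == 0)).sum())
    return dict(total_sleep_min=total_sleep, efficiency_pct=round(efficiency, 1),
                awakenings=awakenings, waso_min=waso)

# toy night: 480 minutes in bed, two wake bouts after falling asleep at minute 20
night = np.ones(480, int)
night[:20] = 0        # time to fall asleep
night[150:160] = 0    # first awakening
night[300:305] = 0    # second awakening
print(sleep_indices(night))
```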

Keywords: sleep chronotype, chronobiology, circadian rhythm, rehabilitation timing

Procedia PDF Downloads 31
100 Compressed Natural Gas (CNG) Injector Research for Dual Fuel Engine

Authors: Adam Majczak, Grzegorz Barański, Marcin Szlachetka

Abstract:

Environmental considerations necessitate the search for new energy sources. One of the available solutions is a partial replacement of diesel fuel by compressed natural gas (CNG) in compression ignition engines. This type of engine is used mainly in vans and trucks, and these units are also gaining more and more popularity in the passenger car market; in Europe, their share of this market reaches 50%. Diesel engines are also used in industry in vehicles such as ships or locomotives. Diesel engines have higher emissions of nitrogen oxides in comparison to spark ignition engines. This can currently be limited by optimizing the combustion process and using additional systems such as exhaust gas recirculation or AdBlue technology. The combustion of diesel fuel also emits particulate matter (PM), which is harmful to human health; its emission is limited by the use of a particulate filter. One method of reducing toxic component emissions is the use of liquid gas fuels such as propane-butane (LPG) or compressed natural gas (CNG). In addition to the environmental aspects, there are also economic reasons for using gaseous fuels to power diesel engines. A total or partial replacement of diesel fuel is possible. Depending on the technology used and the percentage of diesel fuel replaced, it is possible to reduce the content of nitrogen oxides in the exhaust gas by up to 30%, particulate matter (PM) by 95%, and carbon monoxide by 20%, relative to the original diesel fuel. The research object is a prototype gas injector designed for direct injection of compressed natural gas (CNG) in compression ignition engines. The construction of the injector allows it to be positioned in the glow plug socket, so that the gas is injected directly into the combustion chamber. A cycle analysis of the four-cylinder Andoria ADCR engine with a capacity of 2.6 dm3 at different crankshaft rotational speeds allowed the time available for fuel injection to be determined. From this, it was possible to determine the required mass flow rate of the injector needed to replace as much of the original fuel as possible with gaseous fuel. To ensure a high flow rate inside the injector, a supply pressure of 1 MPa was applied. A high gas supply pressure requires high valve opening forces; for this purpose, an injector with a hydraulic control system, using a pressurized liquid for the opening process, was designed. On the basis of air pressure measurements in the flow line after the injector, an analysis of the opening and closing of the valve was made. Measurements of the injector outflow mass were also carried out. The results showed that the designed injector meets the requirements necessary to supply the ADCR engine with CNG fuel.
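
A back-of-the-envelope sketch of how a required injector mass flow can be estimated from the engine data quoted above (2.6 dm3, four cylinders) and an assumed injection window; every operating value below (engine speed, volumetric efficiency, air-fuel ratio, heating values, substitution fraction, crank-angle window) is an illustrative assumption, not a figure from the study.

```python
# Rough estimate of the CNG mass flow a direct gas injector must deliver
# so that a chosen fraction of the diesel energy is replaced by natural gas.

V_d = 2.6e-3 / 4          # displacement per cylinder [m^3] (2.6 dm^3, 4 cylinders)
rpm = 3000.0              # engine speed [1/min] (assumed)
eta_vol = 0.85            # volumetric efficiency (assumed)
rho_air = 1.2             # intake air density [kg/m^3] (assumed)
afr_diesel = 25.0         # overall air-fuel ratio in diesel operation (assumed, lean)
lhv_diesel = 42.5e6       # lower heating value of diesel [J/kg]
lhv_cng = 48.0e6          # lower heating value of CNG [J/kg]
substitution = 0.6        # fraction of diesel energy replaced by CNG (assumed)
inj_window_deg = 40.0     # available injection window [crank angle degrees] (assumed)

# air and fuel per cycle for one cylinder (4-stroke: one intake per 2 revolutions)
m_air = eta_vol * rho_air * V_d                          # [kg/cycle]
m_diesel = m_air / afr_diesel                            # [kg/cycle]
m_cng = substitution * m_diesel * lhv_diesel / lhv_cng   # [kg/cycle]

# time available for injection over the assumed crank-angle window
t_inj = (inj_window_deg / 360.0) / (rpm / 60.0)          # [s]

mdot_required = m_cng / t_inj                            # [kg/s] per injector
print(f"CNG per cycle: {m_cng*1e6:.2f} mg, injection window: {t_inj*1e3:.2f} ms")
print(f"required injector mass flow: {mdot_required*1000:.2f} g/s")
```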

Keywords: CNG, diesel engine, gas flow, gas injector

Procedia PDF Downloads 466
99 Applications of Multi-Path Futures Analyses for Homeland Security Assessments

Authors: John Hardy

Abstract:

A range of future-oriented intelligence techniques is commonly used by states to assess their national security and develop strategies to detect and manage threats, to develop and sustain capabilities, and to recover from attacks and disasters. Although homeland security organizations use futures intelligence tools to generate scenarios and simulations which inform their planning, there have been relatively few studies of the methods available or their applications for homeland security purposes. This study presents an assessment of one category of strategic intelligence techniques, termed Multi-Path Futures Analyses (MPFA), and how it can be applied to three distinct tasks for the purpose of analyzing homeland security issues. Within this study, MPFA are categorized as a suite of analytic techniques which can include effects-based operations principles, general morphological analysis, multi-path mapping, and multi-criteria decision analysis techniques. These techniques generate multiple pathways to potential futures and thereby generate insight into the relative influence of individual drivers of change, the desirability of particular combinations of pathways, and the kinds of capabilities which may be required to influence or mitigate certain outcomes. The study assessed eighteen uses of MPFA for homeland security purposes and found that there are five key applications of MPFA which add significant value to analysis. The first application is generating measures of success and associated progress indicators for strategic planning. The second application is identifying homeland security vulnerabilities and relationships between individual drivers of vulnerability which may amplify or dampen their effects. The third application is selecting appropriate resources and methods of action to influence individual drivers. The fourth application is prioritizing and optimizing path selection preferences and decisions. The fifth application is informing capability development and procurement decisions to build and sustain homeland security organizations. Each of these applications provides a unique perspective on a homeland security issue by comparing a range of potential future outcomes at a set number of intervals and by contrasting the relative resource requirements, opportunity costs, and effectiveness measures of alternative courses of action. These findings indicate that MPFA enhances analysts' ability to generate tangible measures of success, identify vulnerabilities, select effective courses of action, prioritize future pathway preferences, and contribute to ongoing capability development in homeland security assessments.

Keywords: homeland security, intelligence, national security, operational design, strategic intelligence, strategic planning

Procedia PDF Downloads 119
98 Analyzing Bridge Response to Wind Loads and Optimizing Design for Wind Resistance and Stability

Authors: Abdul Haq

Abstract:

The goal of this research is to better understand how wind loads affect bridges and develop strategies for designing bridges that are more stable and resistant to wind. Understanding the effect of wind on bridges is essential to their safety and functionality, especially in areas that are prone to high wind speeds or violent wind conditions. The study looks at the aerodynamic forces and vibrations caused by wind and how they affect bridge construction. Part of the research method involves first understanding the underlying ideas influencing wind flow near bridges. Computational fluid dynamics (CFD) simulations are used to model and forecast the aerodynamic behaviour of bridges under different wind conditions. These models incorporate several factors, such as wind directionality, wind speed, turbulence intensity, and the influence of nearby structures or topography. The results provide significant new insights into the loads and pressures that wind places on different bridge elements, such as decks, pylons, and connections. Following the determination of the wind loads, the structural response of bridges is assessed. Finite Element Analysis (FEA) is used to model the bridge's component parts and simulate their dynamic behavior under wind-induced forces. This work contributes to the understanding of which areas are at risk of experiencing excessive stresses, vibrations, or oscillations due to wind excitations. Because the bridge has inherent modes and frequencies, the study considers both static and dynamic responses. Various strategies are examined to optimize the design of bridges to withstand wind. It is possible to alter the bridge's geometry, add aerodynamic components, add dampers or tuned mass dampers to lessen vibrations, and boost structural rigidity. Through an analysis of several design modifications and their effectiveness, the study aims to offer guidelines and recommendations for wind-resistant bridge design. In addition to the numerical simulations and analyses, there are experimental studies. In order to assess the computational models and validate the practicality of proposed design strategies, scaled bridge models are tested in a wind tunnel. These investigations help to improve numerical models and prediction precision by providing valuable information on wind-induced forces, pressures, and flow patterns. Using a combination of numerical models, actual testing, and long-term performance evaluation, the project aims to offer practical insights and recommendations for building wind-resistant bridges that are secure, long-lasting, and comfortable for users.
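
To make the vibration-mitigation option above more concrete, the sketch below evaluates a static wind drag load on a deck section and sizes a tuned mass damper with the classical Den Hartog formulas; the deck properties, wind speed, drag coefficient and modal data are illustrative assumptions, not values from the study.

```python
import math

# --- static wind drag on a bridge deck section (per unit length) ---
rho_air = 1.25          # air density [kg/m^3]
U = 35.0                # design mean wind speed [m/s] (assumed)
C_d = 1.1               # drag coefficient of the deck section (assumed)
depth = 3.0             # exposed deck depth [m] (assumed)
drag_per_m = 0.5 * rho_air * U**2 * C_d * depth          # [N/m]

# --- tuned mass damper sized with the classical Den Hartog rules ---
M = 5.0e5               # modal mass of the critical deck mode [kg] (assumed)
f_n = 0.9               # natural frequency of that mode [Hz] (assumed)
mu = 0.02               # damper-to-modal mass ratio (assumed)

m_d = mu * M                                              # damper mass [kg]
f_ratio_opt = 1.0 / (1.0 + mu)                            # optimal frequency ratio f_d / f_n
zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimal damping ratio
f_d = f_ratio_opt * f_n
k_d = m_d * (2.0 * math.pi * f_d) ** 2                    # damper stiffness [N/m]
c_d = 2.0 * zeta_opt * m_d * (2.0 * math.pi * f_d)        # damper damping [N*s/m]

print(f"drag load: {drag_per_m/1e3:.2f} kN/m")
print(f"TMD: m = {m_d:.0f} kg, f = {f_d:.3f} Hz, k = {k_d/1e6:.2f} MN/m, c = {c_d/1e3:.1f} kN*s/m")
```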

Keywords: wind effects, aerodynamic forces, computational fluid dynamics, finite element analysis

Procedia PDF Downloads 38
97 Functional Surfaces and Edges for Cutting and Forming Tools Created Using Directed Energy Deposition

Authors: Michal Brazda, Miroslav Urbanek, Martina Koukolikova

Abstract:

This work focuses on the development of functional surfaces and edges for cutting and forming tools created through Directed Energy Deposition (DED) technology. In the context of growing challenges in modern engineering, additive technologies, especially DED, present an innovative approach to manufacturing tools for forming and cutting. One of the key features of DED is its ability to precisely and efficiently deposit fully dense metals from powder feedstock, enabling the creation of complex geometries and optimized designs. It is gradually becoming an increasingly attractive choice for tool production due to its ability to achieve high precision while simultaneously minimizing waste and material costs. Tools created using DED technology gain significant durability through the utilization of high-performance materials such as nickel alloys and tool steels. For high-temperature applications, Nimonic 80A alloy is applied, while for cold applications, M2 tool steel is used. The addition of ceramic materials, such as tungsten carbide, can significantly increase the tool's resistance. The introduction of functionally graded materials is a significant contribution, opening up new possibilities for gradual changes in the mechanical properties of the tool and optimizing its performance in different sections according to specific requirements. This work provides an overview of individual applications and their utilization in industry. Microstructural analyses have been conducted, providing detailed insights into the structure of individual components alongside examinations of the mechanical properties and tool life. These analyses offer a deeper understanding of the efficiency and reliability of the created tools, which is a key element for successful development in the field of cutting and forming tools. The production of functional surfaces and edges using DED technology can result in financial savings, as the entire tool doesn't have to be manufactured from expensive special alloys. The tool can be made from common steel, onto which a functional surface of special materials can be applied. Additionally, it allows for tool repairs after wear and tear, eliminating the need to produce a new part, contributing to an overall cost reduction and reducing the environmental footprint. Overall, the combination of DED technology, functionally graded materials, and verified technologies collectively sets a new standard for innovative and efficient development of cutting and forming tools in the modern industrial environment.

Keywords: additive manufacturing, directed energy deposition, DED, laser, cutting tools, forming tools, steel, nickel alloy

Procedia PDF Downloads 21
96 Studies on Optimizing the Level of Liquid Biofertilizers in Peanut and Maize and Their Economic Analysis

Authors: Chandragouda R. Patil, K. S. Jagadeesh, S. D. Kalolgi

Abstract:

Biofertilizers containing live microbial cells can mobilize one or more nutrients to plants when applied to either seed or rhizosphere. They form an integral part of nutrient management strategies for sustainable production of agricultural crops. Annually, about 22 tons of lignite-based biofertilizers are produced and supplied to farmers at the Institute of Organic Farming, University of Agricultural Sciences, Dharwad, Karnataka state, India. Although carrier-based biofertilizers are common, they have a shorter shelf life, poor quality, high contamination, unpredictable field performance and the high cost of solid carriers. Hence, liquid formulations are being developed to increase their efficacy and broaden field applicability. An attempt was made to develop liquid formulations of the strains Rhizobium NC-92 (groundnut) and Azospirillum ACD15, both nitrogen-fixing biofertilizers, and Pseudomonas striata, an efficient P-solubilizing bacterium (PSB). Different concentrations of amendments such as additives (glycerol and polyethylene glycol), adjuvants (carboxymethyl cellulose), gum arabic (GA), surfactant (polysorbate) and, specifically for Azospirillum, trehalose were found essential. Combinations of formulations of Rhizobium and PSB for groundnut, and of Azospirillum and PSB for maize, were evaluated under field conditions to determine the optimum level of inoculum required. Each biofertilizer strain was inoculated at the rate of 2, 4 or 8 ml per kg of seeds, and the efficacy of each formulation, both individually and in combination, was evaluated against the lignite-based formulation at the rate of 20 g each per kg of seeds; an un-inoculated set was included to compare the inoculation effect. The field experiment had 17 treatments in three replicates, and the best level of inoculum was decided based on net returns and the benefit-cost ratio. In peanut, the combination of 4 ml of Rhizobium and 2 ml of PSB resulted in the highest net returns and a higher benefit-cost ratio of 1:2.98, followed by the treatment with a combination of 2 ml per kg each of Rhizobium and PSB, with a B:C ratio of 1:2.84. The benefits in terms of net returns were to the extent of 16 percent due to inoculation with lignite-based formulations, while they were up to 48 percent due to the best combination of liquid biofertilizers. In maize, the combination of liquid formulations consisting of 4 ml of Azospirillum and 2 ml of PSB resulted in the highest net returns, about 53 percent higher than the un-inoculated control and 20 percent higher than the treatment with the lignite-based formulation. In both crops, inoculation with lignite-based formulations significantly increased the net returns over the un-inoculated control, while levels higher or lower than 4 ml of Rhizobium or Azospirillum and higher or lower than 2 ml of PSB were not economical and hence not optimal for these two crops.
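
A small sketch of the net-return and benefit-cost arithmetic used to rank the inoculum levels; the yields, prices and cultivation costs below are invented placeholders, and only the calculation pattern follows the abstract.

```python
def economics(yield_kg_ha, price_per_kg, cost_of_cultivation):
    """Gross return, net return and benefit:cost ratio for one treatment."""
    gross = yield_kg_ha * price_per_kg
    net = gross - cost_of_cultivation
    bc_ratio = net / cost_of_cultivation        # expressed below as 1 : x
    return net, bc_ratio

# hypothetical peanut treatments (yield in kg/ha, price and cost in local currency)
treatments = {
    "uninoculated control":             (1500, 50, 30000),
    "lignite Rhizobium + PSB (20 g)":   (1750, 50, 30500),
    "liquid Rhizobium 4 ml + PSB 2 ml": (2250, 50, 30200),
}

for name, args in treatments.items():
    net, bc = economics(*args)
    print(f"{name:35s} net return = {net:8.0f}   B:C = 1:{bc:.2f}")
```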

Keywords: Rhizobium, Azospirillum, phosphate solubilizing bacteria, liquid formulation, benefit-cost ratio

Procedia PDF Downloads 469
95 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality of life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care. A better understanding of the increased utilization of health services is essential for optimizing the allocation of healthcare resources to services and thus for enhancing service quality, especially for regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimation (AMLE) can be derived accordingly, which is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows the standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H0 should be rejected with a P-value less than 0.001, i.e., the zero-inflation effect is very significant and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant, whereas if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model with or without considering measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will result in statistically more reliable and precise information.
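
For the basic (measurement-error-free) part of this comparison, the sketch below fits an ordinary Poisson and a zero-inflated Poisson model to simulated consultation counts and compares observed versus expected zeros and log-likelihoods; it does not implement the paper's approximate-likelihood score test, and the simulated data and covariate are purely illustrative.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# simulate zero-inflated consultation counts with one covariate (e.g. an HRQOL score)
rng = np.random.default_rng(42)
n = 600
hrqol = rng.normal(0, 1, n)
lam = np.exp(0.3 - 0.4 * hrqol)             # Poisson mean for the "service users"
structural_zero = rng.random(n) < 0.25      # 25% never use the service
y = np.where(structural_zero, 0, rng.poisson(lam))

X = sm.add_constant(hrqol)

# naive check: observed zeros vs zeros expected under a plain Poisson fit
pois = sm.Poisson(y, X).fit(disp=False)
expected_zeros = np.exp(-pois.predict(X)).sum()
print(f"observed zeros: {(y == 0).sum()}, expected under Poisson: {expected_zeros:.0f}")

# zero-inflated Poisson fit (logit inflation part, intercept only here)
zip_res = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)),
                              inflation='logit').fit(maxiter=200, disp=False)
print(f"log-likelihood  Poisson: {pois.llf:.1f}   ZIP: {zip_res.llf:.1f}")
# a clearly higher ZIP log-likelihood together with excess observed zeros points to
# zero inflation, which is what the formal score test assesses
```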

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 263
94 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying technologies for various food processing operations share an inevitable linkage with energy, cost and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing the climate change scenario. However, the performance and reliability of solar dryers depend hugely on sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome the disadvantages of solar dryers. For the development of such robust hybrid dryers, optimization of the process parameters becomes extremely critical to ensure the quality and shelf-life of paddy grains. Investigation of the moisture distribution profile within the grains is necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool to provide better insight into moisture migration during the drying process. Hence, the present work aims at optimizing the process parameters and developing a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model for predicting the moisture profile. Furthermore, optimization of the process parameters (power level, air velocity and moisture content) was done using response surface methodology in Design-Expert software. The 3D finite element model (FEM) for predicting moisture migration in a single kernel for every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE) and standard error (SE) were found to be 0.003, 0.0531 and 0.0007, respectively, indicating close agreement of the model with experimental results. Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (wb) moisture, with an optimum temperature, milling yield and drying time of 42°C, 62% and 86 min, respectively, having a desirability of 0.905. The above optimized conditions can be successfully used to dry paddy in the PV-integrated solar dryer in order to attain maximum uniformity, quality and yield of product. PV-integrated hybrid solar dryers can be employed as a potential and cutting-edge drying technology alternative for sustainable energy and food security.
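
The validation statistics quoted above (MAE, MRE, SE) are straightforward to compute once predicted and measured moisture contents are available; a minimal sketch follows, with invented numbers standing in for the simulated and experimental values, and with one common definition of the standard error.

```python
import numpy as np

def validation_stats(observed, predicted):
    """Mean absolute error, mean relative error and standard error of prediction."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    err = predicted - observed
    mae = np.mean(np.abs(err))
    mre = np.mean(np.abs(err) / observed)            # relative to the measured value
    se = np.sqrt(np.sum(err**2) / (len(err) - 1))    # one common standard-error definition
    return mae, mre, se

# placeholder moisture contents (decimal, wet basis) at successive drying times
measured  = [0.130, 0.121, 0.112, 0.104, 0.097, 0.090]
simulated = [0.128, 0.122, 0.110, 0.105, 0.095, 0.091]

mae, mre, se = validation_stats(measured, simulated)
print(f"MAE = {mae:.4f}, MRE = {mre:.4f}, SE = {se:.4f}")
```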

Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer

Procedia PDF Downloads 120
93 Optimizing the Doses of Chitosan/Tripolyphosphate Loaded Nanoparticles of Clodinofop Propargyl and Fenoxaprop-P-Ethyl to Manage Avena Fatua L.: An Environmentally Safer Alternative to Control Weeds

Authors: Muhammad Ather Nadeem, Bilal Ahmad Khan, Hussam F. Najeeb Alawadi, Athar Mahmood, Aneela Nijabat, Tasawer Abbas, Muhammad Habib, Abdullah

Abstract:

The global prevalence of Avena fatua infestation poses a significant challenge to wheat sustainability. While chemical control stands out as an efficient and rapid way to control weeds, concerns over the development of resistance in weeds and environmental pollution have led to criticisms of herbicide use. Consequently, this study was designed to address these challenges through the chemical synthesis, characterization, and optimization of chitosan-based nanoparticles containing clodinofop-propargyl and fenoxaprop-P-ethyl for the effective management of A. fatua. Utilizing the ionic gelification technique, chitosan-based nanoparticles of clodinofop-propargyl and fenoxaprop-P-ethyl were prepared. These nanoparticles were applied at the 3-4 leaf stage of Phalaris minor weed at seven different doses (D0 (untreated check), D1 (recommended dose of traditional herbicide (TH)), D2 (recommended dose of nano-herbicide (NPs-H)), D3 (NPs-H at a 5-fold lower dose), D4 (NPs-H at a 10-fold lower dose), D5 (NPs-H at a 15-fold lower dose), and D6 (NPs-H at a 20-fold lower dose)). Characterization of the chitosan-containing herbicide nanoparticles (CHT-NPs) was conducted using FT-IR analysis, demonstrating a perfect match with standard parameters. The UV-visible spectrum further revealed absorption peaks at 310 nm for the nanoparticles of clodinofop-propargyl and at 330 nm for the nanoparticles of fenoxaprop-P-ethyl. This research aims to contribute to sustainable weed management practices by addressing the challenges associated with chemical herbicide use. The application of chitosan-based nanoparticles (CHT-NPs) containing fenoxaprop-P-ethyl and clodinofop-propargyl at the recommended dose of the standard herbicide resulted in 100% mortality and visible injury to weeds. Surprisingly, even when applied at a 5-fold lower dose, these chitosan-containing nanoparticles of clodinofop-propargyl and fenoxaprop-P-ethyl demonstrated excellent control efficacy. Furthermore, at a 10-fold lower dose compared to the standard herbicides and the recommended dose of clodinofop-propargyl and fenoxaprop-P-ethyl, the chitosan-based nanoparticles exhibited comparable effects on chlorophyll content, visual injury (%), mortality (%), plant height (cm), fresh weight (g), and dry weight (g) of A. fatua. This study indicates that chitosan/tripolyphosphate-loaded nanoparticles containing clodinofop-propargyl and fenoxaprop-P-ethyl can be effectively utilized for the management of A. fatua at a 10-fold lower dose, highlighting their potential for sustainable and efficient weed control.

Keywords: mortality, chitosan-based nanoparticles, visual injury, chlorophyll contents, 5-fold lower dose

Procedia PDF Downloads 29
92 The Higher Education Accreditation Foreign Experience for Ukraine

Authors: Dmytro Symak

Abstract:

Experience in other countries shows that the role of higher education accreditation, as one type of quality assurance process for the provision of educational services, is increasing. This has been the experience of highly developed countries such as the USA, Canada, France and Germany, because without a proper quality assurance process it is impossible to achieve a successful future for the nation and the state. In most countries, the function of higher education accreditation is performed by public authorities, in particular the Ministry of Education. In the US, however, the quality assurance process is independent of the government and implemented by a private non-governmental organization, the Council for Higher Education Accreditation. In France, the main body that carries out accreditation of higher education is the Ministry of National Education. Part of the Bologna process is the mutual recognition and accreditation of degrees. While higher education institutions issue diplomas, the ministry awards the title; this is the main level of accreditation, awarded automatically to state universities. In addition, France has the following major levels of higher education accreditation: accreditation for a visa (second-level accreditation) and recognition of accreditation (third-level accreditation). In some areas of education, the ministry must adopt formal recommendations from specific bodies before accreditation, but there are also some exceptions. Thus, French educational institutions, mainly the large business schools, also seek non-French accreditation. These include, for example, the Association to Advance Collegiate Schools of Business, the Association of MBAs, the European Foundation for Management Development and the European Quality Improvement System, a prestigious EFMD programme accreditation system. The German accreditation system is also noteworthy. The primary body here is the Conference of the Ministers of Education and Cultural Affairs of the Länder in the Federal Republic of Germany (Kultusministerkonferenz, KMK), established in 1948 by agreement between the states of the Federal Republic of Germany. Among its main responsibilities is ensuring quality and continuity of development in higher education. In Germany, bachelor's and master's programmes must be accredited in accordance with the resolutions of the Kultusministerkonferenz. In Ukraine, higher education accreditation is carried out by the Ministry of Education, Youth and Sports of Ukraine at four main levels. Ukraine's legislation on higher education is based on the Constitution of Ukraine and consists of the laws of Ukraine 'On Education', 'On Scientific and Technical Activity', 'On Higher Education' and other legal acts, and it is entirely within the competence of the state. This leads to considerable centralization and bureaucratization of the process. Thus, the analysis of foreign experience leads to the conclusion that reforming the system of accreditation and quality of higher education in Ukraine for its integration into the global space requires solving a number of problems in the following areas: improving the system of state certification and licensing; optimizing the network of higher education institutions; creating both governmental and non-governmental organizations to monitor the higher education process in Ukraine; and so on.

Keywords: higher education, accreditation, decentralization, education institutions

Procedia PDF Downloads 310
91 Challenges beyond the Singapore Future-Ready School ‘LEADER’ Qualities

Authors: Zoe Boon Suan Loy

Abstract:

An exploratory research study undertaken in 2020, at the beginning of the COVID-19 pandemic, examined the changing roles of Singapore school leaders as they lead teachers in developing future-ready learners. While it is evident that 'LEADER' qualities epitomize the knowledge, competencies, and skills required, recent events in an increasingly VUCA and BANI world, characterized by the massively disruptive Ukraine-Russia war, unabatingly tense US-Sino relations, issues related to sustainability, and rapid ageing, will have an impact on school leadership. As an increasingly complex endeavour, this requires a relook as school leaders lead teachers in nurturing holistically developed future-ready students. Digitalisation, new technology, and the push for a green economy will be the key driving forces that have an impact on job availability. Similarly, the rapid growth of artificial intelligence (AI) capabilities, including ChatGPT, will aggravate and add tremendous stress to the work of school leaders. This paper seeks to explore the key school leadership shifts required beyond the 'LEADER' qualities as school leaders respond to the changes, challenges, and opportunities of the 21st-century new normal. The research findings for this paper are based on an exploratory qualitative study of the perceptions of 26 school leaders (vice-principals) who were attending a milestone educational leadership course at the National Institute of Education, Nanyang Technological University, Singapore. A structured questionnaire was designed to collect the data, which were then analysed using a coding methodology. Broad themes on the key competencies and skills of future-ready leaders in the Singapore education system were then identified. Key findings: In undertaking their leadership roles as leaders of future-ready learners, school leaders need to demonstrate the 'LEADER' qualities. They need to have a long-term view, understand the educational imperatives, have a good awareness of self and the dispositions of a leader, be effective in optimizing external leverages, and be clear about their role expectations. These 'LEADER' qualities are necessary and relevant in the post-COVID era. Beyond this, school leaders with 'LEADER' qualities are well supported by the Ministry of Education, which takes cognizance of emerging trends and continually reviews education policies to address related issues. Concluding statement: Discussions within the education ecosystem and among other stakeholders on the implications of the use of artificial intelligence and ChatGPT for the school curriculum, including content knowledge, pedagogy, and assessment, are ongoing. This augurs well for school leaders as they undertake their responsibilities as leaders of future-ready learners.

Keywords: Singapore education system, ‘LEADER’ qualities, school leadership, future-ready leaders, future-ready learners

Procedia PDF Downloads 42
90 Achieving Them Both: Business and Wellness Outcomes in Health Organizations – the 'Tip' Laser Intervention

Authors: Shosh Kazaz, Shmuel Banai, Vered Zilberberg

Abstract:

Optimizing high business performance and employees' well-being simultaneously often challenges organizations. The 'TIP' intervention enables achieving both, as the project described here demonstrates. Increasing outcomes and improving performance were the initial motivators for this explorative project, followed by a request from the head of the Cardiology department: 'I know we are the best at our clinical practice, but we need to take it further and break our own glass ceiling.' Two guided interventions were conducted in two different units within the department, designed to implement advanced managerial and business-oriented tools, along with 'soft tools' based on coaching psychology and particularly wellness coaching. The department's multi-disciplinary teams were assembled to manage and lead the process: mapping the patients' flow, creating solutions, implementing, assessing, improving and assimilating them. Approximately four months later, without additional external resources, meaningful results emerged from the teams in terms of business and performance: shortening the hospitalization length for a given procedure (from 7 to 2.1 days); increasing the daily availability of the catheterization laboratory by 16%, with a resulting rise in profitability; and improving the patients' journey and experience. A year later, those results have been maintained. Furthermore, interviews with the participants revealed positive perceptions regarding the department; a higher sense of joyfulness, connectedness and belonging and a better department climate were reported. Additionally, participants reported a higher sense of fulfillment, as opposed to their initial skepticism and cynicism about their ability to enhance outcomes without more resources (budget and/or manpower), experiencing a mindset change toward the possibility of leading personal and professional growth processes. These reports were supported by analyzing a set of questionnaires that the participants completed, in parallel to a control group of non-participating colleagues. Although the assessment was taken a year after the completion of the project and during the third national COVID-19 quarantine, the results indicated a significant impact on several personal parameters associated with wellness, compared to the control group. The participants were higher in self-efficacy and organizational commitment; men were higher in resilience and optimism, and women were higher in well-being. In conclusion, the relatively short 'TIP' intervention integrates advanced managerial and wellness coaching tools and empowers organizational resources: Team, Individual and Process, and thereby generates multi-impact, measurable results in terms of employees' wellness parameters along with business performance and patient care.

Keywords: coaching, health and wellness, health management, leadership and well-being

Procedia PDF Downloads 161
89 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China

Authors: Feng Yue, Fei Dai

Abstract:

With the rapid development of urbanization in China, environmental protection is under severe pressure. Analyzing and optimizing the landscape pattern is therefore an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and a quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, landscape pattern indices at the landscape, class, and patch levels were studied based on Fragstats. Then, five ecological environment indicators based on the National Environmental Protection Standards of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thus indirectly measuring the improvement in ecological environment quality. The results show that the area of construction land increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11%, respectively; they were mainly converted into construction land. At the landscape level, the landscape indices all showed a downward trend: the Number of Patches (NP), Landscape Shape Index (LSI), Connection Index (CONNECT), Shannon's Diversity Index (SHDI), and Aggregation Index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2%, respectively, indicating that the number of patches, the degree of aggregation, and the landscape connectivity all declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI increased, but the Distribution Statistics Core Area (CORE_AM) decreased. For farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, Patch Density (PD), and LSI decreased, yet patch fragmentation and CORE_AM increased. At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline. The three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is obvious that urbanization greatly influenced the landscape evolution. The ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index continuously declined by 14%. Therefore, an optimization strategy based on greenway network planning is proposed for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization.
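
A minimal sketch of how two of the landscape-level metrics mentioned above, the number of patches (NP) and Shannon's diversity index (SHDI), can be computed from a classified raster; the tiny example grid and the four-neighbour patch rule are illustrative, and Fragstats remains the reference implementation.

```python
import numpy as np
from scipy import ndimage

# toy land-cover raster: 0 = water, 1 = farmland, 2 = forest, 3 = construction
grid = np.array([
    [1, 1, 2, 2, 0],
    [1, 3, 3, 2, 0],
    [1, 3, 3, 2, 2],
    [0, 0, 1, 1, 2],
])

# NP: number of patches, counting 4-connected groups of cells class by class
classes = np.unique(grid)
np_total = sum(ndimage.label(grid == c)[1] for c in classes)

# SHDI: Shannon's diversity index over class area proportions
p = np.array([(grid == c).mean() for c in classes])
shdi = -np.sum(p * np.log(p))

print(f"number of patches (NP): {np_total}")
print(f"Shannon's diversity index (SHDI): {shdi:.3f}")
```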

Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture

Procedia PDF Downloads 134
88 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach

Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton

Abstract:

Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore tree growth and sustainable management are of high priority. This paper aimed to analyse and model the growth of these species at both stand and individual levels, but this presentation focuses on predicting the growth of E. marginata at the individual tree level. More specifically, the study investigated how well individual E. marginata tree growth could be predicted by considering the diameter and height of the tree at the start of the growth period, and whether this prediction could be improved by also accounting for the competition from neighbouring trees in different ways. The study also investigated how many neighbouring trees, or what neighbourhood distance, needed to be considered when accounting for competition. To achieve this aim, the Pearson correlation coefficient was examined among competition indices (CIs) and between CIs and dbh growth, and the competition index that best predicts the diameter growth of individual trees was selected for E. marginata forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed effects modelling approaches. Individual tree growth models were developed for thinned and unthinned stands separately. The developed models were validated using two approaches. In the first approach, models were validated using a subset of data that was not used in model fitting. In the second approach, the model of one growth period was validated with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth. This prediction was improved when competition was included in the model. The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. The models with spatial competition indices validated as more accurate than those with non-spatial indices. The model prediction can be optimized if 10 to 15 competitors (by number), or competitors within about 10 m (by distance) from the base of the subject tree, are included in the model, which can reduce the time and cost of collecting information about the competitors. As competition from neighbours was a significant predictor with a negative effect on growth, it is recommended to include neighbourhood competition when predicting growth and to consider thinning treatments to minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies in larger plots and with a larger number of plots than those used in the present study are recommended.
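
To illustrate a distance-dependent competition term of the kind used here, the sketch below computes a Hegyi-type index (neighbour-to-subject diameter ratio divided by distance, summed over neighbours within a search radius) and uses it with initial size in a simple linear growth regression; the stand data are simulated, and this is not the exact index set or mixed-effects model of the study.

```python
import numpy as np

def hegyi_index(dbh, xy, subject, radius=10.0):
    """Distance-weighted size-ratio competition index for one subject tree."""
    d = np.linalg.norm(xy - xy[subject], axis=1)
    neighbours = (d > 0) & (d <= radius)
    return float(np.sum(dbh[neighbours] / dbh[subject] / d[neighbours]))

# simulated stand: coordinates (m), initial dbh (cm) and 5-year dbh growth (cm)
rng = np.random.default_rng(7)
n = 150
xy = rng.uniform(0, 100, (n, 2))
dbh = rng.uniform(10, 60, n)
ci = np.array([hegyi_index(dbh, xy, i) for i in range(n)])
growth = 2.0 + 0.03 * dbh - 0.8 * ci + rng.normal(0, 0.3, n)   # assumed "true" process

# ordinary least squares: growth ~ intercept + dbh + competition index
X = np.column_stack([np.ones(n), dbh, ci])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((growth - pred) ** 2) / np.sum((growth - growth.mean()) ** 2)
print("coefficients (intercept, dbh, CI):", np.round(coef, 3))
print(f"R^2 = {r2:.2f}")
```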

Keywords: competition, growth, model, thinning

Procedia PDF Downloads 101
87 Investigation of Mass Transfer for RPB Distillation at High Pressure

Authors: Amiza Surmi, Azmi Shariff, Sow Mun Serene Lock

Abstract:

In recent decades, there has been significant emphasis on the pivotal role of Rotating Packed Beds (RPBs) in absorption processes, encompassing the removal of Volatile Organic Compounds (VOCs) from groundwater, deaeration, CO2 absorption, desulfurization, and similar critical applications. The primary focus is on elevating mass transfer rates, enhancing separation efficiency, curbing power consumption, and mitigating pressure drops, and substantial efforts have been invested in exploring the adaptation of RPB technology for offshore deployment. This study delves into nitrogen removal under low-temperature and high-pressure conditions, employing the high-gravity principle via an innovative RPB distillation concept, with a specific emphasis on optimizing mass transfer. To the authors' knowledge, no cryogenic experimental testing of nitrogen removal via RPB has previously been conducted. The research identifies the pivotal process control factors through meticulous experimental testing, with pressure, reflux ratio, and reboil ratio emerging as critical determinants of the desired separation performance. The results are remarkable, with nitrogen removal reaching less than 1 mol% nitrogen in the Liquefied Natural Gas (LNG) product and less than 3 mol% methane in the nitrogen-rich gas stream. The study further examines the mass transfer coefficient, revealing a noteworthy trend of decreasing Number of Transfer Units (NTU) and Area of Transfer Units (ATU) as the rotational speed increases. Notably, the condenser and reboiler impose varying demands depending on the operating pressure, with operation at the lower pressure of 12 bar requiring a more substantial duty than operation of the RPB at 15 bar. In pursuit of optimal energy efficiency, a sensitivity analysis was conducted to pinpoint the combination of pressure and rotating speed that minimizes overall energy consumption. These findings underscore the efficiency of the RPB distillation approach in effecting efficient separation, even when operating under the challenging conditions of low temperature and high pressure. This achievement is attributed to a rigorous process control framework that diligently manages the operational pressure and temperature profile of the RPB. Nonetheless, the study's conclusions point towards the need for further research to address potential scaling challenges and associated risks, paving the way for the industrial implementation of this transformative technology.
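
For readers unfamiliar with transfer-unit calculations, the sketch below evaluates a generic (textbook) gas-phase NTU by numerically integrating dy/(y* − y) along the operating line and derives a transfer-unit length from an assumed packing depth. The equilibrium relation and all numbers are placeholders and do not represent the authors' cryogenic property model or RPB geometry.

```python
# Minimal sketch of a generic gas-phase NTU calculation,
# NTU = integral dy / (y* - y), by the trapezoidal rule.
# The pseudo-equilibrium line and numbers are assumptions only.
import numpy as np

def ntu_gas_phase(y_out, y_in, y_eq, n=201):
    """y_eq(y): vapour composition in equilibrium with the liquid paired with y."""
    y = np.linspace(y_out, y_in, n)
    g = 1.0 / (y_eq(y) - y)
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(y)))

# Hypothetical linear pseudo-equilibrium relation, for demonstration only
y_star = lambda y: 1.8 * y + 0.02

ntu = ntu_gas_phase(y_out=0.02, y_in=0.10, y_eq=y_star)
packing_depth = 0.05          # m of packing (assumed)
htu = packing_depth / ntu     # length of a transfer unit
print(round(ntu, 2), round(htu, 4))   # ~1.28, ~0.039
```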

Keywords: mass transfer coefficient, nitrogen removal, liquefaction, rotating packed bed

Procedia PDF Downloads 23
86 Coupling Random Demand and Route Selection in the Transportation Network Design Problem

Authors: Shabnam Najafi, Metin Turkay

Abstract:

The network design problem (NDP) is used to determine optimal values for pre-specified decision variables, such as capacity expansion of nodes and links, by optimizing system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while taking into account the route choice behaviour of network users. NDP studies have mostly investigated random demand and route selection constraints separately because of the computational challenges involved; in this work, we consider both simultaneously. We present a nonlinear stochastic model for the land use and road network design problem that addresses the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes cost and air pollution simultaneously, subject to random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used as the travel time function on each link. We consider a city with origin and destination nodes that can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the amount of road expansion and the origin zones simultaneously, minimizing the travel and expansion costs of routes and origin zones on the one hand and CO emissions on the other at the same time. Demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model; treating both demand and route choice as random is more applicable to the design of urban networks. The epsilon-constraint method, which can handle both linear and nonlinear multi-objective problems, is used to solve the problem: the first objective (the cost function) is kept as the objective function, while the second objective (emissions) becomes a constraint required to be less than an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set, as sketched in the example below. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved with GAMS, and the set of Pareto points is obtained; there are 15 efficient solutions, and as the cost function value increases, the emission function value decreases, and vice versa.
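
The example below is a minimal sketch of that epsilon-constraint scheme on a toy one-link design problem: expansion cost plus BPR-based travel cost is minimized while the emission objective is moved into a constraint and the bound epsilon is swept from its worst to its best value to trace a small Pareto set. Demand is treated as fixed here, and all coefficients are illustrative assumptions rather than the paper's GAMS model.

```python
# Minimal sketch: epsilon-constraint on a toy one-link capacity expansion problem.
# Demand, BPR coefficients, unit costs and the emission rate are assumptions.
import numpy as np
from scipy.optimize import minimize

q, c0, t0 = 1000.0, 500.0, 10.0        # demand (veh/h), base capacity, free-flow time (min)
k_exp, vot, e_rate = 5.0, 0.2, 0.01    # expansion cost, value of time, emission rate

def bpr_time(x):
    """Standard BPR link impedance: t = t0 * (1 + 0.15 * (v/c)^4)."""
    return t0 * (1.0 + 0.15 * (q / (c0 + x)) ** 4)

def f_cost(x):       # objective 1: expansion cost + travel cost
    return k_exp * x[0] + vot * q * bpr_time(x[0])

def f_emission(x):   # objective 2: CO emission, proportional to total travel time
    return e_rate * q * bpr_time(x[0])

bounds = [(0.0, 1500.0)]
eps_grid = np.linspace(f_emission([0.0]), f_emission([1500.0]), 8)  # worst -> best

pareto = []
for eps in eps_grid:
    res = minimize(f_cost, x0=[100.0], bounds=bounds, method="SLSQP",
                   constraints=[{"type": "ineq",
                                 "fun": lambda x, e=eps: e - f_emission(x)}])
    if res.success:
        pareto.append((res.x[0], f_cost(res.x), f_emission(res.x)))

for x_opt, cost, emis in pareto:
    print(f"x = {x_opt:7.1f}  cost = {cost:9.1f}  emission = {emis:6.1f}")
```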

Keywords: epsilon-constraint, multi-objective, network design, stochastic

Procedia PDF Downloads 617
85 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains

Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe

Abstract:

The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains that promise improvements in productivity while handling increasing time and cost pressure and the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits built into current value chains, new methods and tools must be applied, and digitalization provides the potential to derive them. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists, based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, ‘target definition’, describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. The second step, ‘analysis of the value chain’, verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. The validation and optimization of the proposed method in a German company from the electronics industry show that the digital transformation of value chains based on lean production raises their inbuilt performance limits.

Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain

Procedia PDF Downloads 278
84 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review

Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos

Abstract:

Description: Root Cause Analysis (RCA) is used by health care teams to examine adverse events (AEs) to identify causes, which then leads to recommendations for prevention. Despite widespread use, RCA has limitations: best practices have not been established for implementing recommendations or tracking the impact of interventions after AEs. During phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients, to identify and understand how the errors occurred, and to generate recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (inpatient drug error), and Scenario B involved detecting an error that had already occurred (critical care drug infusion error). The recommendations generated were improved drug labeling, specialized drug kits, alert signs, and clinical checklists. Aim: Use simulation to optimize interventions recommended after critical event analysis prior to implementation in the clinical environment. Methods: Suggested interventions from Phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or pediatric intensive care unit). Each scenario was simulated 8 times. Recommendations were tested using different volunteer teams, and each scenario was debriefed to understand why the error was repeated despite interventions and how interventions could be improved. Interventions were modified in subsequent simulations until recommendations were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data pertaining to employee communication and hospital standard work were collected and analyzed. Results: Each scenario had a total of three interventions to test. In Scenario A, the error was reproduced in the initial two iterations and mitigated following key intervention changes. In Scenario B, the error was identified immediately in all cases where the intervention checklist was utilized properly. Independently of the intervention changes and improvements, the simulation was beneficial in identifying which interventions should be prioritized for implementation, and it highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: We conclude that interventions that help to change process (an epinephrine kit or mandatory checklist) were more successful at preventing errors than passive interventions (signage, changes to memory aids). Given that even the most successful interventions needed modification and subsequent re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality for institutions to use prior to implementing recommendations from RCA following AE reviews.

Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation

Procedia PDF Downloads 123
83 Industrial Waste Multi-Metal Ion Exchange

Authors: Thomas S. Abia II

Abstract:

Intel Chandler Site has internally developed its first-of-kind (FOK) facility-scale wastewater treatment system to achieve multi-metal ion exchange. The process was carried out using a serial process train of carbon filtration, pH/ORP adjustment, and cationic exchange purification to treat dilute metal wastewater (DMW) discharged from a substrate packaging factory. Spanning a trial period of 10 months, a total of 3,271 samples were collected and statistically analyzed (baseline average ± standard deviation) to evaluate the performance of a 95-gpm, multi-reactor continuous copper ion exchange treatment system that was subsequently retrofitted for manganese ion exchange to meet environmental regulations. The system is also equipped with an inline acid and hot caustic regeneration system to rejuvenate exhausted IX resins and occasionally remove surface crud. Data generated from lab-scale studies were translated into system operating modifications following multiple trial-and-error experiments. Although the DMW treatment system failed to meet internal performance specifications for manganese output, it was observed to remove the cation notwithstanding the prevalence of copper in the waste stream. Accordingly, the average manganese output declined from 6.5 ± 5.6 mg·L⁻¹ at pre-pilot to 1.1 ± 1.2 mg·L⁻¹ post-pilot (an 83% baseline reduction). This milestone was achieved even though the average influent manganese to DMW increased from 1.0 ± 13.7 mg·L⁻¹ at pre-pilot to 2.1 ± 0.2 mg·L⁻¹ post-pilot (a 110% baseline uptick). Likewise, the pre-trial and post-trial average influent copper values to DMW were 22.4 ± 10.2 mg·L⁻¹ and 32.1 ± 39.1 mg·L⁻¹, respectively (a 43% baseline increase). As a result, the pre-trial and post-trial average copper output values were 0.1 ± 0.5 mg·L⁻¹ and 0.4 ± 1.2 mg·L⁻¹, respectively (a 300% baseline uptick). Conclusively, the operating pH range upstream of treatment (between 3.5 and 5) was shown to be the single largest point of influence for optimizing manganese uptake during multi-metal ion exchange. However, the high variability of the influent copper-to-manganese ratio was observed to adversely impact system functionality. This paper discusses the operating parameters, such as pH and oxidation-reduction potential (ORP), that were shown to significantly influence the functional versatility of the ion exchange system. It also discusses limitations of the treatment system, such as influent copper-to-manganese ratio variations, operational configuration, waste by-product management, and system recovery requirements, to provide a balanced assessment of the multi-metal ion exchange process. The take-away is intended to support an assessment of the overall feasibility of ion exchange for metals manufacturing facilities that lack the capability to expand hardware due to real estate restrictions, aggressive schedules, or budgetary constraints.
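
For clarity, the percentage changes quoted above follow directly from the reported pre- and post-pilot averages; the short sketch below reproduces that arithmetic.

```python
# Minimal sketch: percent change = (post - pre) / pre * 100,
# applied to the pre-/post-pilot averages reported in the abstract.
def pct_change(pre: float, post: float) -> float:
    return (post - pre) / pre * 100.0

print(round(pct_change(6.5, 1.1)))    # Mn output:   -83 -> "83% baseline reduction"
print(round(pct_change(1.0, 2.1)))    # Mn influent: 110 -> "110% baseline uptick"
print(round(pct_change(22.4, 32.1)))  # Cu influent:  43 -> "43% baseline increase"
print(round(pct_change(0.1, 0.4)))    # Cu output:   300 -> "300% baseline uptick"
```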

Keywords: copper, industrial wastewater treatment, multi-metal ion exchange, manganese

Procedia PDF Downloads 118
82 Mechanical Properties of Diamond Reinforced Ni Nanocomposite Coatings Made by Co-Electrodeposition with Glycine as Additive

Authors: Yanheng Zhang, Lu Feng, Yilan Kang, Donghui Fu, Qian Zhang, Qiu Li, Wei Qiu

Abstract:

Diamond-reinforced Ni matrix composites have been widely applied in engineering for coating large-area structural parts owing to their high hardness and good wear and corrosion resistance compared with pure nickel. The mechanical properties of a Ni-diamond composite coating can be promoted by high incorporation and uniform distribution of diamond particles in the nickel matrix, while the distribution of the particles is affected by the electrodeposition process parameters, especially the additives in the plating bath. Glycine has been utilized as an organic additive during the preparation of pure nickel coatings, where it can effectively increase the coating hardness. Nevertheless, to the authors' best knowledge, no research on the effects of glycine on Ni-diamond co-deposition has been reported. In this work, diamond-reinforced Ni nanocomposite coatings were fabricated by a co-electrodeposition technique from a modified Watts-type bath in the presence of glycine. After preparation, the SEM morphology of the composite coatings was observed in combination with energy dispersive X-ray spectrometry, and the diamond incorporation was analyzed. The surface morphology and roughness were obtained with a three-dimensional profile instrument. 3D Debye rings formed by XRD were analyzed to characterize the nickel grain size and orientation in the coatings. The average coating thickness was measured with a digital micrometer to deduce the deposition rate. The microhardness was tested with an automatic microhardness tester. The friction coefficient and wear volume were measured with a reciprocating wear tester to characterize the wear resistance and cutting performance of the coatings. The experimental results confirmed that the presence of glycine effectively improved the surface morphology and roughness of the composite coatings. By optimizing the glycine concentration, the incorporation of diamond particles was increased, while the nickel grain size decreased with increasing glycine concentration. The hardness of the composite coatings increased as the glycine concentration increased. The friction and wear properties were evaluated as the glycine concentration was optimized, showing a decrease in the wear volume. The wear resistance of the composite coatings increased as the glycine content was increased to an optimum value, beyond which the wear resistance decreased. Glycine complexation contributed to nickel grain refinement and improved diamond dispersion in the coatings, both of which made a positive contribution to the amount and uniformity of embedded diamond particles, thus enhancing the microhardness, reducing the friction coefficient, and increasing the wear resistance of the composite coatings. Therefore, glycine can be used as an additive during the co-deposition process to improve the mechanical properties of protective coatings.

Keywords: co-electrodeposition, glycine, mechanical properties, Ni-diamond nanocomposite coatings

Procedia PDF Downloads 95
81 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture

Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz

Abstract:

Remote sensing has potential applications in assessing and monitoring the biophysical properties of plants using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurements. The current study assessed the potential of open-data-source satellite images (Sentinel-2 and Landsat 9) for estimating the biophysical properties of a wheat crop on a study farm in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period December 2021 to April 2022. An Unmanned Aerial Vehicle (UAV) was used to capture the spectral response of plant leaves, and a SpectraVue 710s leaf spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight locations within the farm plot, and the physicochemical properties of the soil (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution data from the UAV and the leaf spectrometer were used to validate the satellite images, and the performance of the different sensors was compared based on the measured leaf spectral responses and the extracted vegetation indices at the five sampling points. A scatter plot with the coefficient of determination (R²) and root mean square error (RMSE), and a correlation (r) matrix prepared using the Python corr and heatmap functions, were used to compare the performance of the Sentinel-2 and Landsat 9 vegetation indices against the drone and the SpectraVue 710s spectrophotometer, as illustrated in the sketch below. The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52) and that the soil texture is dominantly clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plant. Both the scatter plot and the correlation matrix showed that the Sentinel-2 vegetation indices correlate better with the vegetation indices of the Buteo drone than those of Landsat 9, whereas the Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Overall, Sentinel-2 performed better than Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve on the current study.
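
The agreement statistics mentioned above can be reproduced with a few lines of Python; the sketch below computes NDVI for two sensors at five sampling points, builds the correlation matrix with pandas, and evaluates R² and RMSE. The reflectance values are placeholders, not the Ovcha Mogila field data.

```python
# Minimal sketch: NDVI per sensor, correlation matrix, R^2 and RMSE.
# The five reflectance values are placeholders, not the field data.
import numpy as np
import pandas as pd

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Hypothetical reflectances at the five sampling points (Sentinel-2 vs drone)
s2  = ndvi(np.array([0.42, 0.46, 0.50, 0.55, 0.58]),
           np.array([0.08, 0.07, 0.06, 0.05, 0.05]))
uav = ndvi(np.array([0.40, 0.47, 0.52, 0.54, 0.60]),
           np.array([0.09, 0.07, 0.06, 0.06, 0.05]))

df = pd.DataFrame({"sentinel2_ndvi": s2, "uav_ndvi": uav})
print(df.corr())                                   # correlation (r) matrix

ss_res = ((uav - s2) ** 2).sum()
ss_tot = ((uav - uav.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot                         # coefficient of determination
rmse = float(np.sqrt(((uav - s2) ** 2).mean()))    # root mean square error
print(round(r2, 3), round(rmse, 4))
# A seaborn heatmap of df.corr() reproduces the kind of matrix used in the study.
```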

Keywords: landsat 9, leaf spectrometer, sentinel 2, UAV

Procedia PDF Downloads 77
80 Revolutionizing Healthcare Communication: The Transformative Role of Natural Language Processing and Artificial Intelligence

Authors: Halimat M. Ajose-Adeogun, Zaynab A. Bello

Abstract:

Artificial Intelligence (AI) and Natural Language Processing (NLP) have transformed computer language comprehension, allowing computers to comprehend spoken and written language with human-like cognition. NLP, a multidisciplinary area that combines rule-based linguistics, machine learning, and deep learning, enables computers to analyze and comprehend human language. NLP applications in medicine range from tackling issues in electronic health records (EHR) and psychiatry to improving diagnostic precision in orthopedic surgery and optimizing clinical procedures with novel technologies like chatbots. The technology shows promise in a variety of medical sectors, including quicker access to medical records, faster decision-making for healthcare personnel, diagnosing dysplasia in Barrett's esophagus, boosting radiology report quality, and so on. However, successful adoption requires training for healthcare workers, fostering a deep understanding of NLP components, and highlighting the significance of validation before actual application. Despite prevailing challenges, continuous multidisciplinary research and collaboration are critical for overcoming restrictions and paving the way for the revolutionary integration of NLP into medical practice. This integration has the potential to improve patient care, research outcomes, and administrative efficiency. The research methodology includes using NLP techniques for Sentiment Analysis and Emotion Recognition, such as evaluating text or audio data to determine the sentiment and emotional nuances communicated by users, which is essential for designing a responsive and sympathetic chatbot. Furthermore, the project includes the adoption of a Personalized Intervention strategy, in which chatbots are designed to personalize responses by merging NLP algorithms with specific user profiles, treatment history, and emotional states. The synergy between NLP and personalized medicine principles is critical for tailoring chatbot interactions to each user's demands and conditions, hence increasing the efficacy of mental health care. A detailed survey corroborated this synergy, revealing a remarkable 20% increase in patient satisfaction levels and a 30% reduction in workloads for healthcare practitioners. The poll, which focused on health outcomes and was administered to both patients and healthcare professionals, highlights the improved efficiency and favorable influence on the broader healthcare ecosystem.
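
As a minimal illustration of the sentiment-analysis step described above, the sketch below uses the Hugging Face transformers sentiment pipeline and a simple routing rule that escalates strongly negative messages. The routing thresholds and response tiers are illustrative assumptions, not the authors' personalised-intervention logic.

```python
# Minimal sketch: sentiment-based message triage for a health chatbot.
# The threshold and response tiers are illustrative assumptions only.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")   # default English sentiment model

def triage(message: str) -> str:
    result = sentiment(message)[0]           # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > 0.9:
        return "escalate: empathetic response + offer human follow-up"
    return "standard: informational response"

print(triage("I have been feeling anxious and can't sleep."))
print(triage("Thanks, the breathing exercise really helped."))
```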

Keywords: natural language processing, artificial intelligence, healthcare communication, electronic health records, patient care

Procedia PDF Downloads 47
79 Secure Optimized Ingress Filtering in Future Internet Communication

Authors: Bander Alzahrani, Mohammed Alreshoodi

Abstract:

Information-centric networking (ICN), using architectures such as the Publish-Subscribe Internet Technology (PURSUIT), has been proposed as a new networking model that aims at replacing the current end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than on which network entities are exchanging information, which allows control plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and lightweight mechanism based on Bloom filter technologies to forward packets. Although this forwarding scheme solves many problems of today's Internet, such as routing table growth and scalability issues, it is vulnerable to brute force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter attacks such as denial-of-service attacks at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are attached directly to end user nodes and used to perform a security test on maliciously injected random packets at the ingress of the path, preventing brute force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifier (FId), uses a cryptographically secure hash function to create a 64-bit hash, h, over the formed FId for authentication purposes, which is included in the packet. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network. We optimize the FId to the minimum possible filling factor, where ρ ≤ ρm, while still supporting longer delivery trees, so network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm; therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. A preliminary analysis of this proposal indicates that with the designed scheme the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability.
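
A simplified version of the ingress check can be sketched as follows: the forwarding identifier (FId) is a fixed-length Bloom filter built by OR-ing link identifiers, its filling factor ρ is the fraction of set bits, and the Edge-FW node drops any packet whose ρ exceeds the configured maximum or whose authentication hash does not verify. The 256-bit FId size, the per-link bit positions, and SHA-256 truncated to 64 bits are illustrative assumptions rather than the PURSUIT specification.

```python
# Minimal sketch: Bloom-filter FId, filling-factor limit and 64-bit hash check.
# FId size, link-ID construction and the hash choice are assumptions only.
import hashlib
import secrets

FID_BITS = 256
RHO_MAX = 0.25                      # maximum allowed filling factor rho_m
SECRET = b"topology-manager-key"    # shared between topology manager and Edge-FW

def link_id(k: int = 5) -> int:
    """A link identifier: a sparse bitmask with k random bits set."""
    bits = 0
    while bin(bits).count("1") < k:
        bits |= 1 << secrets.randbelow(FID_BITS)
    return bits

def build_fid(path_links):
    """Topology manager: OR the link IDs of the path and attach a 64-bit hash h."""
    fid = 0
    for lid in path_links:
        fid |= lid
    h = hashlib.sha256(SECRET + fid.to_bytes(FID_BITS // 8, "big")).digest()[:8]
    return fid, h

def ingress_check(fid: int, h: bytes) -> bool:
    """Edge-FW: accept only authenticated FIds whose filling factor <= rho_m."""
    rho = bin(fid).count("1") / FID_BITS
    expected = hashlib.sha256(SECRET + fid.to_bytes(FID_BITS // 8, "big")).digest()[:8]
    return rho <= RHO_MAX and h == expected

path = [link_id() for _ in range(6)]          # legitimate short delivery tree
fid, h = build_fid(path)
print(ingress_check(fid, h))                  # True: sparse and authenticated

forged = secrets.randbits(FID_BITS)           # brute-force FId: ~50% of bits set
print(ingress_check(forged, h))               # False: rho too high / bad hash
```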

Keywords: forwarding identifier, filling factor, information centric network, topology manager

Procedia PDF Downloads 131
78 Optimizing the Pair Carbon Xerogels-Electrolyte for High Performance Supercapacitors

Authors: Boriana Karamanova, Svetlana Veleva, Luybomir Soserov, Ana Arenillas, Francesco Lufrano, Antonia Stoyanova

Abstract:

Supercapacitors have received a lot of research attention and are promising energy storage devices due to their high power and long cycle life. In order to develop an advanced device with significant charge-storage capacity based on cheap carbon materials, efforts must focus not only on improving synthesis by controlling morphology and pore size but also on improving the electrode-electrolyte compatibility of the resulting systems. The present study examines the relationship between the surface chemistry of two activated carbon xerogels, the electrolyte type, and the electrochemical properties of supercapacitors. The activated carbon xerogels were prepared by varying the initial pH of the resorcinol-formaldehyde aqueous solution. The materials produced were physicochemically characterized by DTA/TGA, porosity characterization, and SEM analysis. The carbon xerogel-based electrodes were prepared by spreading a slurry containing the carbon gel, graphite, and polyvinylidene difluoride (PVDF) binder over a glass plate. The layer formed was dried consecutively at different temperatures, detached with water, and then dried again to improve its mechanical stability. The developed electrode materials and an Aquivion® E87-05S membrane (Solvay Specialty Polymers), soaked in Na2SO4 as a polymer electrolyte, were used to assemble the solid-state supercapacitor. Symmetric supercapacitor cells composed of the same electrodes and a 1 M KOH electrolyte were also assembled and tested for comparison. The supercapacitor performance was verified by different electrochemical methods - cyclic voltammetry, galvanostatic charge/discharge measurements, electrochemical impedance spectroscopy, and long-term durability tests in neutral and alkaline electrolytes. Specific capacitances, energy and power densities, energy efficiencies, and durability were compared across the studied supercapacitors. Ex-situ physicochemical analyses of the synthesized materials were also performed, providing information about chemical and structural changes in the electrode morphology during the charge/discharge durability tests; these are discussed on the basis of the electrode-electrolyte interaction. The obtained correlations could be of significance for designing sustainable solid-state supercapacitors with high power and energy density. Acknowledgement: This research is funded by the Ministry of Education and Science of Bulgaria under the National Program "European Scientific Networks" (Agreement D01-286 / 07.10.2020, D01-78/30.03.2021), which the authors gratefully acknowledge.
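
For context, the sketch below shows how specific capacitance, energy, and power are commonly extracted from a galvanostatic discharge curve of a symmetric two-electrode cell. The current, mass, voltage window, and discharge time are placeholders, and conventions (per-electrode versus per-cell mass, IR-drop correction) vary between studies, so this is not a reproduction of the reported results.

```python
# Minimal sketch: common textbook formulas for cell metrics from a
# galvanostatic discharge. All numbers below are placeholders.
def cell_metrics(current_a, dt_s, mass_g, dv_v):
    c_sp = current_a * dt_s / (mass_g * dv_v)       # F/g (total active mass)
    e_wh_kg = 0.5 * c_sp * dv_v ** 2 / 3.6          # Wh/kg
    p_w_kg = e_wh_kg * 3600.0 / dt_s                # W/kg
    return c_sp, e_wh_kg, p_w_kg

# Hypothetical discharge at 1 A/g on 10 mg total active mass over a 1.0 V window
print(cell_metrics(current_a=0.010, dt_s=120.0, mass_g=0.010, dv_v=1.0))
# -> (120.0 F/g, ~16.7 Wh/kg, 500.0 W/kg)
```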

Keywords: carbon xerogel, electrochemical tests, neutral and alkaline electrolytes, supercapacitors

Procedia PDF Downloads 107
77 Chebyshev Collocation Method for Solving Heat Transfer Analysis for Squeezing Flow of Nanofluid in Parallel Disks

Authors: Mustapha Rilwan Adewale, Salau Ayobami Muhammed

Abstract:

This study focuses on the heat transfer analysis of magneto-hydrodynamic (MHD) squeezing flow between parallel disks, considering a viscous incompressible fluid. The upper disk exhibits both upward and downward motion, while the lower disk remains stationary but permeable. By employing similarity transformations, a system of nonlinear ordinary differential equations is derived to describe the flow behavior. To solve this system, a numerical approach, namely the Chebyshev collocation method, is utilized. The study investigates the influence of the flow parameters and compares the obtained results with the existing literature. The significance of this research lies in understanding the heat transfer characteristics of MHD squeezing flow, which has practical implications in various engineering and industrial applications. The similarity transformations simplify the complex governing equations into a system of nonlinear ordinary differential equations, facilitating the analysis of the flow behavior. To obtain numerical solutions for the system, the Chebyshev collocation method is implemented; this approach provides accurate approximations of the nonlinear equations, enabling efficient computation of the heat transfer properties. The obtained results are compared with the existing literature, establishing the validity and consistency of the numerical approach. The study's major findings shed light on the influence of the flow parameters on the heat transfer characteristics of the squeezing flow. The analysis reveals the impact on the heat transfer rate between the disks of physical parameters such as magnetic field strength, disk motion amplitude, and fluid viscosity, as captured by the squeeze number (S), suction/injection parameter (A), Hartmann number (M), Prandtl number (Pr), modified Eckert number (Ec), and dimensionless length (δ). These findings contribute to a comprehensive understanding of the system's behavior and provide insights for optimizing heat transfer processes in similar configurations. In conclusion, this study presents a thorough heat transfer analysis of magneto-hydrodynamic squeezing flow between parallel disks. The numerical solutions obtained through the Chebyshev collocation method demonstrate the feasibility and accuracy of the approach, and the investigation of the flow parameters highlights their influence on heat transfer, contributing to the existing knowledge in this field. The agreement of the results with previous literature further strengthens the reliability of the findings. These outcomes have practical implications for engineering applications and pave the way for further research in related areas.
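
To show the core ingredient of the method in isolation, the sketch below builds the standard Chebyshev differentiation matrix and solves the simple two-point boundary value problem u'' = 1 with u(-1) = u(1) = 0, whose exact solution is u = (x² - 1)/2. The coupled nonlinear MHD squeezing-flow equations are handled with the same ingredients plus an iterative solve, which is not reproduced here.

```python
# Minimal sketch: Chebyshev collocation for u'' = 1, u(-1) = u(1) = 0.
# The matrix construction follows the standard Gauss-Lobatto formulation.
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 16
D, x = cheb(n)
D2 = D @ D                           # second-derivative operator
A = D2[1:-1, 1:-1]                   # restrict to interior nodes (Dirichlet BCs)
f = np.ones(n - 1)                   # right-hand side of u'' = 1
u_interior = np.linalg.solve(A, f)
u = np.concatenate([[0.0], u_interior, [0.0]])

exact = (x ** 2 - 1.0) / 2.0
print(np.max(np.abs(u - exact)))     # spectral accuracy: error near machine precision
```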

Keywords: squeezing flow, magneto-hydro-dynamics (MHD), Chebyshev collocation method (CCA), parallel manifolds, finite difference method (FDM)

Procedia PDF Downloads 47